| field | value | date |
|---|---|---|
| author | Ulf Hansson <ulf.hansson@linaro.org> | 2016-12-08 11:23:49 +0100 |
| committer | Ulf Hansson <ulf.hansson@linaro.org> | 2016-12-12 16:30:05 +0100 |
| commit | f397c8d80a5e413984bd9ccdf4161c7156b365ce (patch) | |
| tree | 9767f0418fe8ef53d6571fefc8b7b53558b7fce0 /drivers/mmc/core | |
| parent | ff6af28faff53a7389230026b83e543385f7b21d (diff) | |
| download | linux-f397c8d80a5e413984bd9ccdf4161c7156b365ce.tar.gz linux-f397c8d80a5e413984bd9ccdf4161c7156b365ce.tar.bz2 linux-f397c8d80a5e413984bd9ccdf4161c7156b365ce.zip | |
mmc: block: Move files to core
Once upon a time it made sense to keep the mmc block device driver and its
related code in its own directory called card. Over time, more and more
functions/structures have become shared through generic mmc header files
between the core and the card directory. In other words, the relationship
between them has become closer.

By sharing functions/structures via generic header files, it becomes easy
for outside users to abuse them. To avoid that from happening, let's move
the files from the card directory into the core directory, as it enables
us to move the definitions of those functions/structures into mmc core
specific header files.

Note that this is only the first step in providing a cleaner mmc interface
for outside users. Following changes will do the actual cleanup, as that is
not part of this change.
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
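As an illustration of the header split the message refers to (not part of the patch itself): any kernel code can include the generic headers under include/linux/mmc/, whereas headers kept inside drivers/mmc/core/ — such as queue.h and block.h, which block.c includes in the diff below — are only reachable from files within that directory. A minimal sketch, with mmc_blk_internal_helper() as a purely hypothetical core-only declaration:

```c
/* Public API: any driver anywhere in the tree can include these. */
#include <linux/mmc/card.h>
#include <linux/mmc/host.h>

/*
 * Core-private headers: only compilation units inside drivers/mmc/core/
 * can pick these up with a plain relative include.
 */
#include "queue.h"	/* drivers/mmc/core/queue.h */
#include "block.h"	/* drivers/mmc/core/block.h */

/*
 * Hypothetical helper: declaring it in a core-private header such as
 * block.h, rather than under include/linux/mmc/, keeps it out of reach
 * of out-of-core users.
 */
int mmc_blk_internal_helper(struct mmc_card *card);
```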
Diffstat (limited to 'drivers/mmc/core')
| mode | path | lines |
|---|---|---|
| -rw-r--r-- | drivers/mmc/core/Kconfig | 66 |
| -rw-r--r-- | drivers/mmc/core/Makefile | 4 |
| -rw-r--r-- | drivers/mmc/core/block.c | 2336 |
| -rw-r--r-- | drivers/mmc/core/block.h | 1 |
| -rw-r--r-- | drivers/mmc/core/mmc_test.c | 3312 |
| -rw-r--r-- | drivers/mmc/core/queue.c | 489 |
| -rw-r--r-- | drivers/mmc/core/queue.h | 64 |
| -rw-r--r-- | drivers/mmc/core/sdio_uart.c | 1200 |
8 files changed, 7472 insertions, 0 deletions
diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig
index 250f223aaa80..cdfa8520a4b1 100644
--- a/drivers/mmc/core/Kconfig
+++ b/drivers/mmc/core/Kconfig
@@ -22,3 +22,69 @@ config PWRSEQ_SIMPLE
 
 	  This driver can also be built as a module. If so, the module
 	  will be called pwrseq_simple.
+
+config MMC_BLOCK
+	tristate "MMC block device driver"
+	depends on BLOCK
+	default y
+	help
+	  Say Y here to enable the MMC block device driver support.
+	  This provides a block device driver, which you can use to
+	  mount the filesystem. Almost everyone wishing MMC support
+	  should say Y or M here.
+
+config MMC_BLOCK_MINORS
+	int "Number of minors per block device"
+	depends on MMC_BLOCK
+	range 4 256
+	default 8
+	help
+	  Number of minors per block device. One is needed for every
+	  partition on the disk (plus one for the whole disk).
+
+	  Number of total MMC minors available is 256, so your number
+	  of supported block devices will be limited to 256 divided
+	  by this number.
+
+	  Default is 8 to be backwards compatible with previous
+	  hardwired device numbering.
+
+	  If unsure, say 8 here.
+
+config MMC_BLOCK_BOUNCE
+	bool "Use bounce buffer for simple hosts"
+	depends on MMC_BLOCK
+	default y
+	help
+	  SD/MMC is a high latency protocol where it is crucial to
+	  send large requests in order to get high performance. Many
+	  controllers, however, are restricted to continuous memory
+	  (i.e. they can't do scatter-gather), something the kernel
+	  rarely can provide.
+
+	  Say Y here to help these restricted hosts by bouncing
+	  requests back and forth from a large buffer. You will get
+	  a big performance gain at the cost of up to 64 KiB of
+	  physical memory.
+
+	  If unsure, say Y here.
+
+config SDIO_UART
+	tristate "SDIO UART/GPS class support"
+	depends on TTY
+	help
+	  SDIO function driver for SDIO cards that implements the UART
+	  class, as well as the GPS class which appears like a UART.
+
+config MMC_TEST
+	tristate "MMC host test driver"
+	help
+	  Development driver that performs a series of reads and writes
+	  to a memory card in order to expose certain well known bugs
+	  in host controllers. The tests are executed by writing to the
+	  "test" file in debugfs under each card. Note that whatever is
+	  on your card will be overwritten by these tests.
+
+	  This driver is only of interest to those developing or
+	  testing a host driver. Most people should say N here.
+
diff --git a/drivers/mmc/core/Makefile b/drivers/mmc/core/Makefile
index f007151dfdc6..b2a257dc644f 100644
--- a/drivers/mmc/core/Makefile
+++ b/drivers/mmc/core/Makefile
@@ -12,3 +12,7 @@ mmc_core-$(CONFIG_OF)	+= pwrseq.o
 obj-$(CONFIG_PWRSEQ_SIMPLE)	+= pwrseq_simple.o
 obj-$(CONFIG_PWRSEQ_EMMC)	+= pwrseq_emmc.o
 mmc_core-$(CONFIG_DEBUG_FS)	+= debugfs.o
+obj-$(CONFIG_MMC_BLOCK)	+= mmc_block.o
+mmc_block-objs		:= block.o queue.o
+obj-$(CONFIG_MMC_TEST)	+= mmc_test.o
+obj-$(CONFIG_SDIO_UART)	+= sdio_uart.o
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
new file mode 100644
index 000000000000..646d1a1fa6ca
--- /dev/null
+++ b/drivers/mmc/core/block.c
@@ -0,0 +1,2336 @@
+/*
+ * Block driver for media (i.e., flash cards)
+ *
+ * Copyright 2002 Hewlett-Packard Company
+ * Copyright 2005-2008 Pierre Ossman
+ *
+ * Use consistent with the GNU GPL is permitted,
+ * provided that this copyright notice is
+ * preserved in its entirety in all copies and derived works.
+ *
+ * HEWLETT-PACKARD COMPANY MAKES NO WARRANTIES, EXPRESSED OR IMPLIED,
+ * AS TO THE USEFULNESS OR CORRECTNESS OF THIS CODE OR ITS
+ * FITNESS FOR ANY PARTICULAR PURPOSE.
+ * + * Many thanks to Alessandro Rubini and Jonathan Corbet! + * + * Author: Andrew Christian + * 28 May 2002 + */ +#include <linux/moduleparam.h> +#include <linux/module.h> +#include <linux/init.h> + +#include <linux/kernel.h> +#include <linux/fs.h> +#include <linux/slab.h> +#include <linux/errno.h> +#include <linux/hdreg.h> +#include <linux/kdev_t.h> +#include <linux/blkdev.h> +#include <linux/mutex.h> +#include <linux/scatterlist.h> +#include <linux/string_helpers.h> +#include <linux/delay.h> +#include <linux/capability.h> +#include <linux/compat.h> +#include <linux/pm_runtime.h> +#include <linux/idr.h> + +#include <linux/mmc/ioctl.h> +#include <linux/mmc/card.h> +#include <linux/mmc/host.h> +#include <linux/mmc/mmc.h> +#include <linux/mmc/sd.h> + +#include <asm/uaccess.h> + +#include "queue.h" +#include "block.h" + +MODULE_ALIAS("mmc:block"); +#ifdef MODULE_PARAM_PREFIX +#undef MODULE_PARAM_PREFIX +#endif +#define MODULE_PARAM_PREFIX "mmcblk." + +#define INAND_CMD38_ARG_EXT_CSD 113 +#define INAND_CMD38_ARG_ERASE 0x00 +#define INAND_CMD38_ARG_TRIM 0x01 +#define INAND_CMD38_ARG_SECERASE 0x80 +#define INAND_CMD38_ARG_SECTRIM1 0x81 +#define INAND_CMD38_ARG_SECTRIM2 0x88 +#define MMC_BLK_TIMEOUT_MS (10 * 60 * 1000) /* 10 minute timeout */ +#define MMC_SANITIZE_REQ_TIMEOUT 240000 +#define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16) + +#define mmc_req_rel_wr(req) ((req->cmd_flags & REQ_FUA) && \ + (rq_data_dir(req) == WRITE)) +static DEFINE_MUTEX(block_mutex); + +/* + * The defaults come from config options but can be overriden by module + * or bootarg options. + */ +static int perdev_minors = CONFIG_MMC_BLOCK_MINORS; + +/* + * We've only got one major, so number of mmcblk devices is + * limited to (1 << 20) / number of minors per device. It is also + * limited by the MAX_DEVICES below. + */ +static int max_devices; + +#define MAX_DEVICES 256 + +static DEFINE_IDA(mmc_blk_ida); +static DEFINE_SPINLOCK(mmc_blk_lock); + +/* + * There is one mmc_blk_data per slot. + */ +struct mmc_blk_data { + spinlock_t lock; + struct device *parent; + struct gendisk *disk; + struct mmc_queue queue; + struct list_head part; + + unsigned int flags; +#define MMC_BLK_CMD23 (1 << 0) /* Can do SET_BLOCK_COUNT for multiblock */ +#define MMC_BLK_REL_WR (1 << 1) /* MMC Reliable write support */ + + unsigned int usage; + unsigned int read_only; + unsigned int part_type; + unsigned int reset_done; +#define MMC_BLK_READ BIT(0) +#define MMC_BLK_WRITE BIT(1) +#define MMC_BLK_DISCARD BIT(2) +#define MMC_BLK_SECDISCARD BIT(3) + + /* + * Only set in main mmc_blk_data associated + * with mmc_card with dev_set_drvdata, and keeps + * track of the current selected device partition. 
+ */ + unsigned int part_curr; + struct device_attribute force_ro; + struct device_attribute power_ro_lock; + int area_type; +}; + +static DEFINE_MUTEX(open_lock); + +module_param(perdev_minors, int, 0444); +MODULE_PARM_DESC(perdev_minors, "Minors numbers to allocate per device"); + +static inline int mmc_blk_part_switch(struct mmc_card *card, + struct mmc_blk_data *md); +static int get_card_status(struct mmc_card *card, u32 *status, int retries); + +static struct mmc_blk_data *mmc_blk_get(struct gendisk *disk) +{ + struct mmc_blk_data *md; + + mutex_lock(&open_lock); + md = disk->private_data; + if (md && md->usage == 0) + md = NULL; + if (md) + md->usage++; + mutex_unlock(&open_lock); + + return md; +} + +static inline int mmc_get_devidx(struct gendisk *disk) +{ + int devidx = disk->first_minor / perdev_minors; + return devidx; +} + +static void mmc_blk_put(struct mmc_blk_data *md) +{ + mutex_lock(&open_lock); + md->usage--; + if (md->usage == 0) { + int devidx = mmc_get_devidx(md->disk); + blk_cleanup_queue(md->queue.queue); + + spin_lock(&mmc_blk_lock); + ida_remove(&mmc_blk_ida, devidx); + spin_unlock(&mmc_blk_lock); + + put_disk(md->disk); + kfree(md); + } + mutex_unlock(&open_lock); +} + +static ssize_t power_ro_lock_show(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int ret; + struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev)); + struct mmc_card *card = md->queue.card; + int locked = 0; + + if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PERM_WP_EN) + locked = 2; + else if (card->ext_csd.boot_ro_lock & EXT_CSD_BOOT_WP_B_PWR_WP_EN) + locked = 1; + + ret = snprintf(buf, PAGE_SIZE, "%d\n", locked); + + mmc_blk_put(md); + + return ret; +} + +static ssize_t power_ro_lock_store(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + int ret; + struct mmc_blk_data *md, *part_md; + struct mmc_card *card; + unsigned long set; + + if (kstrtoul(buf, 0, &set)) + return -EINVAL; + + if (set != 1) + return count; + + md = mmc_blk_get(dev_to_disk(dev)); + card = md->queue.card; + + mmc_get_card(card); + + ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, EXT_CSD_BOOT_WP, + card->ext_csd.boot_ro_lock | + EXT_CSD_BOOT_WP_B_PWR_WP_EN, + card->ext_csd.part_time); + if (ret) + pr_err("%s: Locking boot partition ro until next power on failed: %d\n", md->disk->disk_name, ret); + else + card->ext_csd.boot_ro_lock |= EXT_CSD_BOOT_WP_B_PWR_WP_EN; + + mmc_put_card(card); + + if (!ret) { + pr_info("%s: Locking boot partition ro until next power on\n", + md->disk->disk_name); + set_disk_ro(md->disk, 1); + + list_for_each_entry(part_md, &md->part, part) + if (part_md->area_type == MMC_BLK_DATA_AREA_BOOT) { + pr_info("%s: Locking boot partition ro until next power on\n", part_md->disk->disk_name); + set_disk_ro(part_md->disk, 1); + } + } + + mmc_blk_put(md); + return count; +} + +static ssize_t force_ro_show(struct device *dev, struct device_attribute *attr, + char *buf) +{ + int ret; + struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev)); + + ret = snprintf(buf, PAGE_SIZE, "%d\n", + get_disk_ro(dev_to_disk(dev)) ^ + md->read_only); + mmc_blk_put(md); + return ret; +} + +static ssize_t force_ro_store(struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + int ret; + char *end; + struct mmc_blk_data *md = mmc_blk_get(dev_to_disk(dev)); + unsigned long set = simple_strtoul(buf, &end, 0); + if (end == buf) { + ret = -EINVAL; + goto out; + } + + set_disk_ro(dev_to_disk(dev), set || md->read_only); + ret = count; +out: + 
mmc_blk_put(md); + return ret; +} + +static int mmc_blk_open(struct block_device *bdev, fmode_t mode) +{ + struct mmc_blk_data *md = mmc_blk_get(bdev->bd_disk); + int ret = -ENXIO; + + mutex_lock(&block_mutex); + if (md) { + if (md->usage == 2) + check_disk_change(bdev); + ret = 0; + + if ((mode & FMODE_WRITE) && md->read_only) { + mmc_blk_put(md); + ret = -EROFS; + } + } + mutex_unlock(&block_mutex); + + return ret; +} + +static void mmc_blk_release(struct gendisk *disk, fmode_t mode) +{ + struct mmc_blk_data *md = disk->private_data; + + mutex_lock(&block_mutex); + mmc_blk_put(md); + mutex_unlock(&block_mutex); +} + +static int +mmc_blk_getgeo(struct block_device *bdev, struct hd_geometry *geo) +{ + geo->cylinders = get_capacity(bdev->bd_disk) / (4 * 16); + geo->heads = 4; + geo->sectors = 16; + return 0; +} + +struct mmc_blk_ioc_data { + struct mmc_ioc_cmd ic; + unsigned char *buf; + u64 buf_bytes; +}; + +static struct mmc_blk_ioc_data *mmc_blk_ioctl_copy_from_user( + struct mmc_ioc_cmd __user *user) +{ + struct mmc_blk_ioc_data *idata; + int err; + + idata = kmalloc(sizeof(*idata), GFP_KERNEL); + if (!idata) { + err = -ENOMEM; + goto out; + } + + if (copy_from_user(&idata->ic, user, sizeof(idata->ic))) { + err = -EFAULT; + goto idata_err; + } + + idata->buf_bytes = (u64) idata->ic.blksz * idata->ic.blocks; + if (idata->buf_bytes > MMC_IOC_MAX_BYTES) { + err = -EOVERFLOW; + goto idata_err; + } + + if (!idata->buf_bytes) { + idata->buf = NULL; + return idata; + } + + idata->buf = kmalloc(idata->buf_bytes, GFP_KERNEL); + if (!idata->buf) { + err = -ENOMEM; + goto idata_err; + } + + if (copy_from_user(idata->buf, (void __user *)(unsigned long) + idata->ic.data_ptr, idata->buf_bytes)) { + err = -EFAULT; + goto copy_err; + } + + return idata; + +copy_err: + kfree(idata->buf); +idata_err: + kfree(idata); +out: + return ERR_PTR(err); +} + +static int mmc_blk_ioctl_copy_to_user(struct mmc_ioc_cmd __user *ic_ptr, + struct mmc_blk_ioc_data *idata) +{ + struct mmc_ioc_cmd *ic = &idata->ic; + + if (copy_to_user(&(ic_ptr->response), ic->response, + sizeof(ic->response))) + return -EFAULT; + + if (!idata->ic.write_flag) { + if (copy_to_user((void __user *)(unsigned long)ic->data_ptr, + idata->buf, idata->buf_bytes)) + return -EFAULT; + } + + return 0; +} + +static int ioctl_rpmb_card_status_poll(struct mmc_card *card, u32 *status, + u32 retries_max) +{ + int err; + u32 retry_count = 0; + + if (!status || !retries_max) + return -EINVAL; + + do { + err = get_card_status(card, status, 5); + if (err) + break; + + if (!R1_STATUS(*status) && + (R1_CURRENT_STATE(*status) != R1_STATE_PRG)) + break; /* RPMB programming operation complete */ + + /* + * Rechedule to give the MMC device a chance to continue + * processing the previous command without being polled too + * frequently. + */ + usleep_range(1000, 5000); + } while (++retry_count < retries_max); + + if (retry_count == retries_max) + err = -EPERM; + + return err; +} + +static int ioctl_do_sanitize(struct mmc_card *card) +{ + int err; + + if (!mmc_can_sanitize(card)) { + pr_warn("%s: %s - SANITIZE is not supported\n", + mmc_hostname(card->host), __func__); + err = -EOPNOTSUPP; + goto out; + } + + pr_debug("%s: %s - SANITIZE IN PROGRESS...\n", + mmc_hostname(card->host), __func__); + + err = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, + EXT_CSD_SANITIZE_START, 1, + MMC_SANITIZE_REQ_TIMEOUT); + + if (err) + pr_err("%s: %s - EXT_CSD_SANITIZE_START failed. 
err=%d\n", + mmc_hostname(card->host), __func__, err); + + pr_debug("%s: %s - SANITIZE COMPLETED\n", mmc_hostname(card->host), + __func__); +out: + return err; +} + +static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md, + struct mmc_blk_ioc_data *idata) +{ + struct mmc_command cmd = {0}; + struct mmc_data data = {0}; + struct mmc_request mrq = {NULL}; + struct scatterlist sg; + int err; + int is_rpmb = false; + u32 status = 0; + + if (!card || !md || !idata) + return -EINVAL; + + if (md->area_type & MMC_BLK_DATA_AREA_RPMB) + is_rpmb = true; + + cmd.opcode = idata->ic.opcode; + cmd.arg = idata->ic.arg; + cmd.flags = idata->ic.flags; + + if (idata->buf_bytes) { + data.sg = &sg; + data.sg_len = 1; + data.blksz = idata->ic.blksz; + data.blocks = idata->ic.blocks; + + sg_init_one(data.sg, idata->buf, idata->buf_bytes); + + if (idata->ic.write_flag) + data.flags = MMC_DATA_WRITE; + else + data.flags = MMC_DATA_READ; + + /* data.flags must already be set before doing this. */ + mmc_set_data_timeout(&data, card); + + /* Allow overriding the timeout_ns for empirical tuning. */ + if (idata->ic.data_timeout_ns) + data.timeout_ns = idata->ic.data_timeout_ns; + + if ((cmd.flags & MMC_RSP_R1B) == MMC_RSP_R1B) { + /* + * Pretend this is a data transfer and rely on the + * host driver to compute timeout. When all host + * drivers support cmd.cmd_timeout for R1B, this + * can be changed to: + * + * mrq.data = NULL; + * cmd.cmd_timeout = idata->ic.cmd_timeout_ms; + */ + data.timeout_ns = idata->ic.cmd_timeout_ms * 1000000; + } + + mrq.data = &data; + } + + mrq.cmd = &cmd; + + err = mmc_blk_part_switch(card, md); + if (err) + return err; + + if (idata->ic.is_acmd) { + err = mmc_app_cmd(card->host, card); + if (err) + return err; + } + + if (is_rpmb) { + err = mmc_set_blockcount(card, data.blocks, + idata->ic.write_flag & (1 << 31)); + if (err) + return err; + } + + if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_SANITIZE_START) && + (cmd.opcode == MMC_SWITCH)) { + err = ioctl_do_sanitize(card); + + if (err) + pr_err("%s: ioctl_do_sanitize() failed. err = %d", + __func__, err); + + return err; + } + + mmc_wait_for_req(card->host, &mrq); + + if (cmd.error) { + dev_err(mmc_dev(card->host), "%s: cmd error %d\n", + __func__, cmd.error); + return cmd.error; + } + if (data.error) { + dev_err(mmc_dev(card->host), "%s: data error %d\n", + __func__, data.error); + return data.error; + } + + /* + * According to the SD specs, some commands require a delay after + * issuing the command. + */ + if (idata->ic.postsleep_min_us) + usleep_range(idata->ic.postsleep_min_us, idata->ic.postsleep_max_us); + + memcpy(&(idata->ic.response), cmd.resp, sizeof(cmd.resp)); + + if (is_rpmb) { + /* + * Ensure RPMB command has completed by polling CMD13 + * "Send Status". + */ + err = ioctl_rpmb_card_status_poll(card, &status, 5); + if (err) + dev_err(mmc_dev(card->host), + "%s: Card Status=0x%08X, error %d\n", + __func__, status, err); + } + + return err; +} + +static int mmc_blk_ioctl_cmd(struct block_device *bdev, + struct mmc_ioc_cmd __user *ic_ptr) +{ + struct mmc_blk_ioc_data *idata; + struct mmc_blk_data *md; + struct mmc_card *card; + int err = 0, ioc_err = 0; + + /* + * The caller must have CAP_SYS_RAWIO, and must be calling this on the + * whole block device, not on a partition. This prevents overspray + * between sibling partitions. 
+ */ + if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains)) + return -EPERM; + + idata = mmc_blk_ioctl_copy_from_user(ic_ptr); + if (IS_ERR(idata)) + return PTR_ERR(idata); + + md = mmc_blk_get(bdev->bd_disk); + if (!md) { + err = -EINVAL; + goto cmd_err; + } + + card = md->queue.card; + if (IS_ERR(card)) { + err = PTR_ERR(card); + goto cmd_done; + } + + mmc_get_card(card); + + ioc_err = __mmc_blk_ioctl_cmd(card, md, idata); + + /* Always switch back to main area after RPMB access */ + if (md->area_type & MMC_BLK_DATA_AREA_RPMB) + mmc_blk_part_switch(card, dev_get_drvdata(&card->dev)); + + mmc_put_card(card); + + err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata); + +cmd_done: + mmc_blk_put(md); +cmd_err: + kfree(idata->buf); + kfree(idata); + return ioc_err ? ioc_err : err; +} + +static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev, + struct mmc_ioc_multi_cmd __user *user) +{ + struct mmc_blk_ioc_data **idata = NULL; + struct mmc_ioc_cmd __user *cmds = user->cmds; + struct mmc_card *card; + struct mmc_blk_data *md; + int i, err = 0, ioc_err = 0; + __u64 num_of_cmds; + + /* + * The caller must have CAP_SYS_RAWIO, and must be calling this on the + * whole block device, not on a partition. This prevents overspray + * between sibling partitions. + */ + if ((!capable(CAP_SYS_RAWIO)) || (bdev != bdev->bd_contains)) + return -EPERM; + + if (copy_from_user(&num_of_cmds, &user->num_of_cmds, + sizeof(num_of_cmds))) + return -EFAULT; + + if (num_of_cmds > MMC_IOC_MAX_CMDS) + return -EINVAL; + + idata = kcalloc(num_of_cmds, sizeof(*idata), GFP_KERNEL); + if (!idata) + return -ENOMEM; + + for (i = 0; i < num_of_cmds; i++) { + idata[i] = mmc_blk_ioctl_copy_from_user(&cmds[i]); + if (IS_ERR(idata[i])) { + err = PTR_ERR(idata[i]); + num_of_cmds = i; + goto cmd_err; + } + } + + md = mmc_blk_get(bdev->bd_disk); + if (!md) { + err = -EINVAL; + goto cmd_err; + } + + card = md->queue.card; + if (IS_ERR(card)) { + err = PTR_ERR(card); + goto cmd_done; + } + + mmc_get_card(card); + + for (i = 0; i < num_of_cmds && !ioc_err; i++) + ioc_err = __mmc_blk_ioctl_cmd(card, md, idata[i]); + + /* Always switch back to main area after RPMB access */ + if (md->area_type & MMC_BLK_DATA_AREA_RPMB) + mmc_blk_part_switch(card, dev_get_drvdata(&card->dev)); + + mmc_put_card(card); + + /* copy to user if data and response */ + for (i = 0; i < num_of_cmds && !err; i++) + err = mmc_blk_ioctl_copy_to_user(&cmds[i], idata[i]); + +cmd_done: + mmc_blk_put(md); +cmd_err: + for (i = 0; i < num_of_cmds; i++) { + kfree(idata[i]->buf); + kfree(idata[i]); + } + kfree(idata); + return ioc_err ? 
ioc_err : err; +} + +static int mmc_blk_ioctl(struct block_device *bdev, fmode_t mode, + unsigned int cmd, unsigned long arg) +{ + switch (cmd) { + case MMC_IOC_CMD: + return mmc_blk_ioctl_cmd(bdev, + (struct mmc_ioc_cmd __user *)arg); + case MMC_IOC_MULTI_CMD: + return mmc_blk_ioctl_multi_cmd(bdev, + (struct mmc_ioc_multi_cmd __user *)arg); + default: + return -EINVAL; + } +} + +#ifdef CONFIG_COMPAT +static int mmc_blk_compat_ioctl(struct block_device *bdev, fmode_t mode, + unsigned int cmd, unsigned long arg) +{ + return mmc_blk_ioctl(bdev, mode, cmd, (unsigned long) compat_ptr(arg)); +} +#endif + +static const struct block_device_operations mmc_bdops = { + .open = mmc_blk_open, + .release = mmc_blk_release, + .getgeo = mmc_blk_getgeo, + .owner = THIS_MODULE, + .ioctl = mmc_blk_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = mmc_blk_compat_ioctl, +#endif +}; + +static inline int mmc_blk_part_switch(struct mmc_card *card, + struct mmc_blk_data *md) +{ + int ret; + struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev); + + if (main_md->part_curr == md->part_type) + return 0; + + if (mmc_card_mmc(card)) { + u8 part_config = card->ext_csd.part_config; + + if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) + mmc_retune_pause(card->host); + + part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK; + part_config |= md->part_type; + + ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL, + EXT_CSD_PART_CONFIG, part_config, + card->ext_csd.part_time); + if (ret) { + if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) + mmc_retune_unpause(card->host); + return ret; + } + + card->ext_csd.part_config = part_config; + + if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB) + mmc_retune_unpause(card->host); + } + + main_md->part_curr = md->part_type; + return 0; +} + +static u32 mmc_sd_num_wr_blocks(struct mmc_card *card) +{ + int err; + u32 result; + __be32 *blocks; + + struct mmc_request mrq = {NULL}; + struct mmc_command cmd = {0}; + struct mmc_data data = {0}; + + struct scatterlist sg; + + cmd.opcode = MMC_APP_CMD; + cmd.arg = card->rca << 16; + cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC; + + err = mmc_wait_for_cmd(card->host, &cmd, 0); + if (err) + return (u32)-1; + if (!mmc_host_is_spi(card->host) && !(cmd.resp[0] & R1_APP_CMD)) + return (u32)-1; + + memset(&cmd, 0, sizeof(struct mmc_command)); + + cmd.opcode = SD_APP_SEND_NUM_WR_BLKS; + cmd.arg = 0; + cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC; + + data.blksz = 4; + data.blocks = 1; + data.flags = MMC_DATA_READ; + data.sg = &sg; + data.sg_len = 1; + mmc_set_data_timeout(&data, card); + + mrq.cmd = &cmd; + mrq.data = &data; + + blocks = kmalloc(4, GFP_KERNEL); + if (!blocks) + return (u32)-1; + + sg_init_one(&sg, blocks, 4); + + mmc_wait_for_req(card->host, &mrq); + + result = ntohl(*blocks); + kfree(blocks); + + if (cmd.error || data.error) + result = (u32)-1; + + return result; +} + +static int get_card_status(struct mmc_card *card, u32 *status, int retries) +{ + struct mmc_command cmd = {0}; + int err; + + cmd.opcode = MMC_SEND_STATUS; + if (!mmc_host_is_spi(card->host)) + cmd.arg = card->rca << 16; + cmd.flags = MMC_RSP_SPI_R2 | MMC_RSP_R1 | MMC_CMD_AC; + err = mmc_wait_for_cmd(card->host, &cmd, retries); + if (err == 0) + *status = cmd.resp[0]; + return err; +} + +static int card_busy_detect(struct mmc_card *card, unsigned int timeout_ms, + bool hw_busy_detect, struct request *req, bool *gen_err) +{ + unsigned long timeout = jiffies + msecs_to_jiffies(timeout_ms); + int err = 0; + u32 status; + + do { + err = 
get_card_status(card, &status, 5); + if (err) { + pr_err("%s: error %d requesting status\n", + req->rq_disk->disk_name, err); + return err; + } + + if (status & R1_ERROR) { + pr_err("%s: %s: error sending status cmd, status %#x\n", + req->rq_disk->disk_name, __func__, status); + *gen_err = true; + } + + /* We may rely on the host hw to handle busy detection.*/ + if ((card->host->caps & MMC_CAP_WAIT_WHILE_BUSY) && + hw_busy_detect) + break; + + /* + * Timeout if the device never becomes ready for data and never + * leaves the program state. + */ + if (time_after(jiffies, timeout)) { + pr_err("%s: Card stuck in programming state! %s %s\n", + mmc_hostname(card->host), + req->rq_disk->disk_name, __func__); + return -ETIMEDOUT; + } + + /* + * Some cards mishandle the status bits, + * so make sure to check both the busy + * indication and the card state. + */ + } while (!(status & R1_READY_FOR_DATA) || + (R1_CURRENT_STATE(status) == R1_STATE_PRG)); + + return err; +} + +static int send_stop(struct mmc_card *card, unsigned int timeout_ms, + struct request *req, bool *gen_err, u32 *stop_status) +{ + struct mmc_host *host = card->host; + struct mmc_command cmd = {0}; + int err; + bool use_r1b_resp = rq_data_dir(req) == WRITE; + + /* + * Normally we use R1B responses for WRITE, but in cases where the host + * has specified a max_busy_timeout we need to validate it. A failure + * means we need to prevent the host from doing hw busy detection, which + * is done by converting to a R1 response instead. + */ + if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) + use_r1b_resp = false; + + cmd.opcode = MMC_STOP_TRANSMISSION; + if (use_r1b_resp) { + cmd.flags = MMC_RSP_SPI_R1B | MMC_RSP_R1B | MMC_CMD_AC; + cmd.busy_timeout = timeout_ms; + } else { + cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC; + } + + err = mmc_wait_for_cmd(host, &cmd, 5); + if (err) + return err; + + *stop_status = cmd.resp[0]; + + /* No need to check card status in case of READ. */ + if (rq_data_dir(req) == READ) + return 0; + + if (!mmc_host_is_spi(host) && + (*stop_status & R1_ERROR)) { + pr_err("%s: %s: general error sending stop command, resp %#x\n", + req->rq_disk->disk_name, __func__, *stop_status); + *gen_err = true; + } + + return card_busy_detect(card, timeout_ms, use_r1b_resp, req, gen_err); +} + +#define ERR_NOMEDIUM 3 +#define ERR_RETRY 2 +#define ERR_ABORT 1 +#define ERR_CONTINUE 0 + +static int mmc_blk_cmd_error(struct request *req, const char *name, int error, + bool status_valid, u32 status) +{ + switch (error) { + case -EILSEQ: + /* response crc error, retry the r/w cmd */ + pr_err("%s: %s sending %s command, card status %#x\n", + req->rq_disk->disk_name, "response CRC error", + name, status); + return ERR_RETRY; + + case -ETIMEDOUT: + pr_err("%s: %s sending %s command, card status %#x\n", + req->rq_disk->disk_name, "timed out", name, status); + + /* If the status cmd initially failed, retry the r/w cmd */ + if (!status_valid) { + pr_err("%s: status not valid, retrying timeout\n", + req->rq_disk->disk_name); + return ERR_RETRY; + } + + /* + * If it was a r/w cmd crc error, or illegal command + * (eg, issued in wrong state) then retry - we should + * have corrected the state problem above. 
+ */ + if (status & (R1_COM_CRC_ERROR | R1_ILLEGAL_COMMAND)) { + pr_err("%s: command error, retrying timeout\n", + req->rq_disk->disk_name); + return ERR_RETRY; + } + + /* Otherwise abort the command */ + return ERR_ABORT; + + default: + /* We don't understand the error code the driver gave us */ + pr_err("%s: unknown error %d sending read/write command, card status %#x\n", + req->rq_disk->disk_name, error, status); + return ERR_ABORT; + } +} + +/* + * Initial r/w and stop cmd error recovery. + * We don't know whether the card received the r/w cmd or not, so try to + * restore things back to a sane state. Essentially, we do this as follows: + * - Obtain card status. If the first attempt to obtain card status fails, + * the status word will reflect the failed status cmd, not the failed + * r/w cmd. If we fail to obtain card status, it suggests we can no + * longer communicate with the card. + * - Check the card state. If the card received the cmd but there was a + * transient problem with the response, it might still be in a data transfer + * mode. Try to send it a stop command. If this fails, we can't recover. + * - If the r/w cmd failed due to a response CRC error, it was probably + * transient, so retry the cmd. + * - If the r/w cmd timed out, but we didn't get the r/w cmd status, retry. + * - If the r/w cmd timed out, and the r/w cmd failed due to CRC error or + * illegal cmd, retry. + * Otherwise we don't understand what happened, so abort. + */ +static int mmc_blk_cmd_recovery(struct mmc_card *card, struct request *req, + struct mmc_blk_request *brq, b |
