Notes on Linux Virtualization Internals: virtio-blk

virtio-blk itself is also a kernel module.

  • The virtio-net device was assigned IRQ 36,
  • the virtio-blk devices were assigned 37 and 38.

Let’s walk through the conceptual path that the virtio block driver traverses to do a single block read, using virtio ring as the example transport.

To begin with, the guest has an empty buffer that the data will be read into. In addition, we allocate

  • a struct virtio_blk_outhdr with the request metadata, and
  • a single byte to receive the status (success or fail).

We put these three parts (empty buffer, virtio_blk_outhdr, single byte) of our request into three free entries of the descriptor table and chain them together. From the device's perspective, the virtio_blk_outhdr is read-only, and the empty buffer and status byte are write-only.

                            Descriptor Table            Available             Used
                            ┌───┬───┬───┬───┐           ┌────────┐          ┌────────┐
            ┌────────────┐  │   │   │   │   │           │        │          │        │
            │Empty Buffer│◄─┤   │   │ W │   │           │        │          │        │
            └────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
┌────────────────────────┐  │   │   │   │   │◄──┘       │        │          │        │
│struct virtio-blk-outhdr│◄─┤   │   │ R │   │           │        │          │        │
└────────────────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
                  ┌──────┐  │   │   │   │   │◄──┘       │        │          │        │
                  │Status│◄─┤   │   │ W │   │           │        │          │        │
                  └──────┘  │   │   │   │   │           │        │          │        │
                            ├───┼───┼───┼───┤           ├────────┤          ├────────┤
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            └───┴───┴───┴───┘           └────────┘          └────────┘
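
In the Linux driver, this three-part chain is built with scatterlists and handed to the ring in one call; compare virtblk_add_req(), quoted later. A condensed sketch, assuming a simplified vbr that carries the header and status byte, with buf, buf_len and vq in scope:

struct scatterlist hdr_sg, data_sg, status_sg, *sgs[3];
int err;

sg_init_one(&hdr_sg, &vbr->out_hdr, sizeof(vbr->out_hdr));   /* device-readable */
sg_init_one(&data_sg, buf, buf_len);                         /* device-writable */
sg_init_one(&status_sg, &vbr->status, sizeof(vbr->status));  /* device-writable */
sgs[0] = &hdr_sg;
sgs[1] = &data_sg;
sgs[2] = &status_sg;

/* one device-readable sg list followed by two device-writable ones;
 * vbr is the opaque token that virtqueue_get_buf() later hands back */
err = virtqueue_add_sgs(vq, sgs, 1, 2, vbr, GFP_ATOMIC);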

Once this is done, the descriptor is ready to be marked available. This is done by placing the index of the descriptor head into the “available” ring, then incrementing the available index:

                            Descriptor Table            Available             Used
                            ┌───┬───┬───┬───┐           ┌────────┐          ┌────────┐
            ┌────────────┐  │   │   │   │   │◄──────────┤        │          │        │
            │Empty Buffer│◄─┤   │   │ W │   │           │        │          │        │
            └────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
┌────────────────────────┐  │   │   │   │   │◄──┘       │        │          │        │
│struct virtio-blk-outhdr│◄─┤   │   │ R │   │           │        │          │        │
└────────────────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
                  ┌──────┐  │   │   │   │   │◄──┘       │        │          │        │
                  │Status│◄─┤   │   │ W │   │           │        │          │        │
                  └──────┘  │   │   │   │   │           │        │          │        │
                            ├───┼───┼───┼───┤           ├────────┤          ├────────┤
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            └───┴───┴───┴───┘           └────────┘          └────────┘
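
A minimal sketch of this "mark available" step for the split-ring layout, with the struct trimmed to the fields the sketch needs (field names follow struct vring_avail in the virtio spec; in Linux, virtqueue_add_sgs() performs this step together with the descriptor setup):

#include <stdint.h>

struct vring_avail {
    uint16_t flags;
    uint16_t idx;     /* where the driver puts the next head */
    uint16_t ring[];  /* queue-size entries holding head indices */
};

/* Publish a descriptor chain whose head index is `head`. A real driver
 * needs a write barrier (virt_wmb() in the kernel) between the two
 * stores, so the device never sees the new idx before the ring entry. */
static void mark_available(struct vring_avail *avail, uint16_t qsize,
                           uint16_t head)
{
    avail->ring[avail->idx % qsize] = head;
    /* write barrier here */
    avail->idx++;
}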

A “kick” is issued to notify the host that a request is pending. At some point in the future, the request will be completed as

  • the buffer is filled and
  • the status byte updated to indicate success.

At this point the descriptor head is returned in the used ring and the guest is notified (i.e. interrupted). The block driver callback then calls get_buf repeatedly to see which requests have finished, until get_buf returns NULL.

                            Descriptor Table            Available             Used
                            ┌───┬───┬───┬───┐           ┌────────┐          ┌────────┐
           ┌─────────────┐  │   │   │   │   │◄───▲──────┤        │          │        │
           │Filled Buffer│◄─┤   │   │ W │   │    └──────┼────────┼──────────┼────    │
           └─────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
┌────────────────────────┐  │   │   │   │   │◄──┘       │        │          │        │
│struct virtio-blk-outhdr│◄─┤   │   │ R │   │           │        │          │        │
└────────────────────────┘  │   │   │   │   ├───┐       │        │          │        │
                            ├───┼───┼───┼───┤   │       ├────────┤          ├────────┤
                  ┌──────┐  │   │   │   │   │◄──┘       │        │          │        │
                  │Status│◄─┤   │   │ W │   │           │        │          │        │
                  └──────┘  │   │   │   │   │           │        │          │        │
                            ├───┼───┼───┼───┤           ├────────┤          ├────────┤
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            │   │   │ . │   │           │        │          │        │
                            └───┴───┴───┴───┘           └────────┘          └────────┘
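
The completion loop described above can be condensed into a sketch modeled on virtblk_done() in drivers/block/virtio_blk.c (simplified: the real function holds the vq lock and loops with virtqueue_disable_cb()/virtqueue_enable_cb() to close the race with completions that arrive in between):

static void completion_callback(struct virtqueue *vq)
{
    struct virtblk_req *vbr;
    unsigned int len;

    /* virtqueue_get_buf() returns the opaque token that was passed to
     * virtqueue_add_sgs(), or NULL once the used ring has nothing new */
    while ((vbr = virtqueue_get_buf(vq, &len)) != NULL) {
        /* the status byte now holds VIRTIO_BLK_S_OK or an error code;
         * complete the corresponding block-layer request */
        blk_mq_complete_request(blk_mq_rq_from_pdu(vbr));
    }
}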

struct virtio_blk_outhdr Kernel

It appears to correspond to one request; it is one part of the request (which also includes the status byte and the buffer).

/*
 * This comes first in the scatter-gather list.
 * For legacy virtio, if VIRTIO_F_ANY_LAYOUT is not negotiated,
 * this is the first element of the read scatter-gather list.
 */
struct virtio_blk_outhdr {
	/* VIRTIO_BLK_T* */
    //  - VIRTIO_BLK_T_GET_ID
    //  - VIRTIO_BLK_T_OUT
	__virtio32 type;
	/* io priority. */
	__virtio32 ioprio;
	/* Sector (ie. 512 byte offset) */
	__virtio64 sector;
};
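
For a read, the driver fills this header roughly as follows (a sketch; vdev is assumed to be the struct virtio_device in scope, and cpu_to_virtio32()/cpu_to_virtio64() apply the transitional endianness rules):

struct virtio_blk_outhdr hdr = {
	.type   = cpu_to_virtio32(vdev, VIRTIO_BLK_T_IN), /* device-to-driver, i.e. a read */
	.ioprio = cpu_to_virtio32(vdev, 0),
	.sector = cpu_to_virtio64(vdev, 0),               /* offset in 512-byte sectors */
};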

Virtio-blk and zero-copy

  1. Inside the VM, an application calls the write syscall to write to a file.
  2. The write syscall enters the guest kernel and passes through the VFS, the generic block layer, and the I/O scheduler layer before reaching the block device driver.
  3. The guest's block device driver is virtio_blk. Like any block driver it has a request queue, and its make_request_fn is set to blk_mq_make_request(), the function that puts requests onto the queue.
  4. virtio_blk also registers an interrupt handler, vp_interrupt(). When qemu finishes the write, the guest's block driver is notified through it.
  5. blk_mq_make_request() eventually calls virtqueue_add() to put the request onto the transport queue, the virtqueue, and then calls virtqueue_notify() to notify qemu.
  6. In qemu, the VM had been sitting in KVM_RUN, i.e. running in guest mode.
  7. The notification causes a VM exit from guest mode back into host mode; from the exit reason, qemu learns that there is I/O to handle.
  8. qemu calls virtio_blk_handle_output(), which ends up in virtio_blk_handle_vq().
  9. virtio_blk_handle_vq() contains a loop in which virtio_blk_get_request() takes requests off the transport queue and virtio_blk_handle_request() processes them.
  10. virtio_blk_handle_request() calls blk_aio_pwritev(), which writes into the qcow2 file through the BlockBackend driver.
  11. Once the write has finished, virtio_blk_req_complete() calls virtio_notify() to tell the guest driver that the data has been written; the interrupt handler vp_interrupt() registered earlier receives this notification.

Since the I/O is based on I/O vectors, we can look at the code below to see how that is set up. In the end, the device backend in QEMU passes the addresses handed over by the guest driver straight to the host file-system layer as an iovec, so the I/O lands directly in the memory the guest specified, without being staged in the device.

Presumably the qcow2 BlockBackend accepts the request directly and translates the virtual qcow2 disk address into an offset within the qcow2 file before doing the I/O. So each piece of virtual-device code in QEMU still corresponds to one virtual device: qcow2 acts as a virtual block device, tun/tap as a virtual network device.

// pay attention to the mrb variable
virtio_blk_handle_vq
    MultiReqBuffer mrb = {};
    req = virtio_blk_get_request(s, vq)
    // req->qiov has not been set yet at this point
    virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
        // first fetch the addresses from req
        struct iovec *in_iov = req->elem.in_sg;
        struct iovec *out_iov = req->elem.out_sg;
        // write out_iov into req->qiov
        qemu_iovec_init_external(&req->qiov, out_iov, out_num);
        mrb->reqs[mrb->num_reqs++] = req;
        mrb->is_write = is_write;
        submit_requests(VirtIOBlock *s, MultiReqBuffer *mrb, int start, int num_reqs, int niov)
            QEMUIOVector *qiov = &mrb->reqs[start]->qiov;
            blk_aio_pwritev(blk, sector_num << BDRV_SECTOR_BITS, qiov, flags, virtio_blk_rw_complete, mrb->reqs[start]);

virtblk_add_req() Guest kernel

The guest driver sends a request to the device. The request consists of multiple buffers forming one descriptor chain.

static int virtblk_add_req(struct virtqueue *vq, struct virtblk_req *vbr)
{
	struct scatterlist out_hdr, in_hdr, *sgs[3];
	unsigned int num_out = 0, num_in = 0;

	sg_init_one(&out_hdr, &vbr->out_hdr, sizeof(vbr->out_hdr));
	sgs[num_out++] = &out_hdr;

	if (vbr->sg_table.nents) {
		if (vbr->out_hdr.type & cpu_to_virtio32(vq->vdev, VIRTIO_BLK_T_OUT))
			sgs[num_out++] = vbr->sg_table.sgl;
		else
			sgs[num_out + num_in++] = vbr->sg_table.sgl;
	}

	sg_init_one(&in_hdr, &vbr->in_hdr.status, vbr->in_hdr_len);
	sgs[num_out + num_in++] = &in_hdr;

	return virtqueue_add_sgs(vq, sgs, num_out, num_in, vbr, GFP_ATOMIC);
}

Virtio-blk feature bits negotiation

The negotiation process as a whole (a toy illustration follows this list):

  • The device tells the driver which features it supports;
  • The driver ANDs these with the features it itself wants enabled;
  • The driver then writes the ANDed feature set back to the device.
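
A toy illustration of that arithmetic (standalone userspace C, not driver code; the bit numbers follow the virtio spec):

#include <stdint.h>
#include <stdio.h>

#define F(bit) (1ULL << (bit))
#define VIRTIO_BLK_F_RO        5   /* device is read-only */
#define VIRTIO_BLK_F_TOPOLOGY 10
#define VIRTIO_BLK_F_MQ       12
#define VIRTIO_F_VERSION_1    32

int main(void)
{
    /* step 1: what the device offers */
    uint64_t device = F(VIRTIO_F_VERSION_1) | F(VIRTIO_BLK_F_MQ) | F(VIRTIO_BLK_F_RO);
    /* what the driver itself understands and wants */
    uint64_t driver = F(VIRTIO_F_VERSION_1) | F(VIRTIO_BLK_F_MQ) | F(VIRTIO_BLK_F_TOPOLOGY);
    /* step 2: keep only the bits both sides support */
    uint64_t negotiated = device & driver;
    /* step 3: the driver would now write this value back to the device */
    printf("negotiated = %#llx\n", (unsigned long long)negotiated); /* VERSION_1 | MQ */
    return 0;
}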

Device prepares features / virtio_blk_get_features() QEMU

In the end everything lands in vdev->host_features.

There are several kinds of features:

  • features common to every VirtIO device
  • features specific to the VirtIO BLK device
// QEMU (backend device)
virtio_device_realize
    virtio_bus_device_plugged
        // features common to all virtio devices
        klass->pre_plugged(qbus->parent, &local_err); // virtio_pci_pre_plugged
            virtio_add_feature(&vdev->host_features, VIRTIO_F_VERSION_1);
            virtio_add_feature(&vdev->host_features, VIRTIO_F_BAD_FEATURE);
        // Everything up to this point is generic virtio code.
        // Note that vdev's host_features is not quite the same as VirtIOBlock->host_features.
        // The generic features are passed in first, e.g.:
        //   - VIRTIO_F_VERSION_1
        //   - VIRTIO_F_BAD_FEATURE
        vdev->host_features = vdc->get_features(vdev, vdev->host_features) //...
            // the virtio-blk-specific way of getting features
            vdc->get_features = virtio_blk_get_features
                virtio_blk_get_features

static uint64_t virtio_blk_get_features(VirtIODevice *vdev, uint64_t features, Error **errp)
{
    VirtIOBlock *s = VIRTIO_BLK(vdev);
    // features: the generic VirtIO features
    // host_features: configurable features such as "config-wce", "scsi", "discard", "write-zeroes"
    features = features | s->host_features;
    virtio_add_feature(&features, VIRTIO_BLK_F_SEG_MAX);
    virtio_add_feature(&features, VIRTIO_BLK_F_GEOMETRY);
    virtio_add_feature(&features, VIRTIO_BLK_F_TOPOLOGY);
    virtio_add_feature(&features, VIRTIO_BLK_F_BLK_SIZE);
    if (virtio_has_feature(features, VIRTIO_F_VERSION_1)) {
        if (virtio_has_feature(s->host_features, VIRTIO_BLK_F_SCSI)) {
            error_setg(errp, "Please set scsi=off for virtio-blk devices in order to use virtio 1.0");
            return 0;
        }
    } else {
        virtio_clear_feature(&features, VIRTIO_F_ANY_LAYOUT);
        virtio_add_feature(&features, VIRTIO_BLK_F_SCSI);
    }

    if (blk_enable_write_cache(s->blk) ||
        (s->conf.x_enable_wce_if_config_wce &&
         virtio_has_feature(features, VIRTIO_BLK_F_CONFIG_WCE))) {
        virtio_add_feature(&features, VIRTIO_BLK_F_WCE);
    }
    if (!blk_is_writable(s->blk)) {
        virtio_add_feature(&features, VIRTIO_BLK_F_RO);
    }
    if (s->conf.num_queues > 1) {
        virtio_add_feature(&features, VIRTIO_BLK_F_MQ);
    }

    return features;
}

VirtIO-blk driver gets features from device

For the legacy approach:

virtio_ioport_read
    switch (addr) {
    case VIRTIO_PCI_HOST_FEATURES:
        ret = vdev->host_features;

For the modern approach:

/* Fields in VIRTIO_PCI_CAP_COMMON_CFG: */
struct virtio_pci_common_cfg {
	/* About the whole device. */
	__le32 device_feature_select;	/* read-write */
	__le32 device_feature;		/* read-only */
    //...
};

struct VirtIOPCIProxy {
    //...
    union {
        struct {
            VirtIOPCIRegion common;
            //...
        };
    };
    //...
}

// QEMU (backend)
virtio_pci_device_plugged
    virtio_pci_modern_regions_init
        static const MemoryRegionOps common_ops = {
            .read = virtio_pci_common_read,
            .write = virtio_pci_common_write,
            .impl = {
                .min_access_size = 1,
                .max_access_size = 4,
            },
            .endianness = DEVICE_LITTLE_ENDIAN,
        };
        memory_region_init_io(&proxy->common.mr, &common_ops, ...)
    virtio_pci_modern_mem_region_map(proxy, &proxy->common, &cap);
        // modern_mem_bar_idx refers to BAR 4 and BAR 5
        virtio_pci_modern_region_map(proxy, region, cap, &proxy->modern_bar, proxy->modern_mem_bar_idx);
// For the rest, see virtio_pci_common_read()^ and virtio_pci_common_write()^
// Guest driver
// legacy version function
.finalize_features = vp_finalize_features
    vp_finalize_features
        vp_legacy_set_features
// modern version function
.finalize_features = vp_finalize_features
    vp_finalize_features
        vp_modern_set_features

virtio_dev_probe
    // find out which features the device supports
    device_features = dev->config->get_features(dev);
	if (device_features & (1ULL << VIRTIO_F_VERSION_1))
        // AND the device-supported features with the driver's features
		dev->features = driver_features & device_features;
	else
		dev->features = driver_features_legacy & device_features;
    //...
    err = dev->config->finalize_features(dev);
    //...

virtio_pci_common_read() QEMU

When the guest driver reads, the backend ends up calling this function:

//   - DF: Device feature
//   - GF: Guest feature
virtio_pci_common_read
    switch (addr) {
    case VIRTIO_PCI_COMMON_DFSELECT:
        val = proxy->dfselect;
        break;
    case VIRTIO_PCI_COMMON_DF:
        //...
        // 0x0 selects the low 32 bits, 0x1 the high 32 bits
        // strip out the legacy features
        val = (vdev->host_features & ~vdc->legacy_features) >> (32 * proxy->dfselect);
        break;
    case VIRTIO_PCI_COMMON_GFSELECT:
        val = proxy->gfselect;
        break;
    case VIRTIO_PCI_COMMON_GF:
        if (proxy->gfselect < ARRAY_SIZE(proxy->guest_features))
            val = proxy->guest_features[proxy->gfselect];
        break;

virtio_pci_common_write() QEMU

Note that only the device feature select, the guest feature select, and the guest features are writable; the device features themselves cannot be written.

static void virtio_pci_common_write(void *opaque, hwaddr addr, uint64_t val, unsigned size)
{
    VirtIOPCIProxy *proxy = opaque;
    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);
    uint16_t vector;
    //...
    switch (addr) {
    case VIRTIO_PCI_COMMON_DFSELECT:
        proxy->dfselect = val;
        break;
    case VIRTIO_PCI_COMMON_GFSELECT:
        proxy->gfselect = val;
        break;
    case VIRTIO_PCI_COMMON_GF:
        if (proxy->gfselect < ARRAY_SIZE(proxy->guest_features)) {
            proxy->guest_features[proxy->gfselect] = val;
            virtio_set_features(vdev, (((uint64_t)proxy->guest_features[1]) << 32) | proxy->guest_features[0]);
        }
        break;
    //...
}

vp_modern_get_features() / vp_legacy_get_features() Guest kernel driver

u64 vp_modern_get_features(struct virtio_pci_modern_device *mdev)
{
	struct virtio_pci_common_cfg __iomem *cfg = mdev->common;
	u64 features;
    // The driver uses this to select which feature bits `device_feature` shows.
    // Value 0x0 selects Feature Bits 0 to 31, 0x1 selects Feature Bits 32 to 63, etc.
	vp_iowrite32(0, &cfg->device_feature_select);
    // read the low 32 bits
	features = vp_ioread32(&cfg->device_feature);
    // then select the high 32 bits
	vp_iowrite32(1, &cfg->device_feature_select);
    // read them, merge the low and high 32 bits, and return the result
	features |= ((u64)vp_ioread32(&cfg->device_feature) << 32);
	return features;
}

u64 vp_legacy_get_features(struct virtio_pci_legacy_device *ldev)
{
	return ioread32(ldev->ioaddr + VIRTIO_PCI_HOST_FEATURES);
}

vp_modern_set_features() Guest kernel driver

void vp_modern_set_features(struct virtio_pci_modern_device *mdev, u64 features)
{
	struct virtio_pci_common_cfg __iomem *cfg = mdev->common;

    // select the low 32 bits for writing
	vp_iowrite32(0, &cfg->guest_feature_select);
    // write the low 32 feature bits
	vp_iowrite32((u32)features, &cfg->guest_feature);
    // select the high 32 bits for writing
	vp_iowrite32(1, &cfg->guest_feature_select);
    // write the high 32 feature bits
	vp_iowrite32(features >> 32, &cfg->guest_feature);
}

A typical process for virtio-blk

  • virtio_driver in guest kernel calls virtblk_add_req(), which calls virtqueue_add_sgs()
__blk_mq_flush_plug_list
virtio_mq_ops->queue_rq()
virtio_queue_rq
virtblk_add_req

  • virtio_driver in the guest calls virtqueue_kick_prepare() and virtqueue_notify() in the virtio_queue_rq() path. Note that the two functions together make up one kick (see virtqueue_kick(), sketched after this list).
  • Then the back-end device driver will pop the request, handle it, and push the result back through the used ring (explained later, as it is not part of the front end).
  • When control returns to the guest, virtblk_done() of virtio_driver is called, which calls virtqueue_get_buf().
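
For reference, virtqueue_kick() in drivers/virtio/virtio_ring.c is essentially just that two-step sequence:

bool virtqueue_kick(struct virtqueue *vq)
{
	if (virtqueue_kick_prepare(vq))
		return virtqueue_notify(vq);
	return true;
}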

Virtio and Vhost Architecture - Part 1 · Better Tomorrow with Computer Science

virtio_blk_handle_output() / virtio_blk_handle_vq() / virtio_blk_handle_request() QEMU

When virtio-blk receives a notification from the guest driver (the frontend), this function gets called.

virtio_queue_notify_vq
    vq->handle_output(vdev, vq);
static void virtio_blk_handle_output(VirtIODevice *vdev, VirtQueue *vq)
{
    VirtIOBlock *s = (VirtIOBlock *)vdev;

    if (s->dataplane && !s->dataplane_started) {
        // Normally the guest driver sets VIRTIO_CONFIG_S_DRIVER_OK first, and that is when
        // we would start the ioeventfd. But some guests kick before setting
        // VIRTIO_CONFIG_S_DRIVER_OK, so start the dataplane here instead of waiting for .set_status().
        virtio_device_start_ioeventfd(vdev);
        if (!s->dataplane_disabled) {
            return;
        }
    }
    virtio_blk_handle_vq(s, vq);
}
void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
{
    VirtIOBlockReq *req;
    MultiReqBuffer mrb = {};
    bool suppress_notifications = virtio_queue_get_notification(vq);

    //...
    do {
        // suppress further guest notifications while this one is being processed
        if (suppress_notifications)
            virtio_queue_set_notification(vq, 0);

        while ((req = virtio_blk_get_request(s, vq))) {
            if (virtio_blk_handle_request(req, &mrb)) {
                virtqueue_detach_element(req->vq, &req->elem, 0);
                virtio_blk_free_request(req);
                break;
            }
        }

        // re-enable notifications now that this round has been handled
        if (suppress_notifications)
            virtio_queue_set_notification(vq, 1);
    } while (!virtio_queue_empty(vq));

    if (mrb.num_reqs) {
        virtio_blk_submit_multireq(s, &mrb);
    }
    //...
}
static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
{
    uint32_t type;
    struct iovec *in_iov = req->elem.in_sg;
    struct iovec *out_iov = req->elem.out_sg;
    unsigned in_num = req->elem.in_num;
    unsigned out_num = req->elem.out_num;
    VirtIOBlock *s = req->dev;
    VirtIODevice *vdev = VIRTIO_DEVICE(s);

    // the element (descriptor chain) behind this request must have at least one out (device-readable) and one in (device-writable) descriptor
    if (req->elem.out_num < 1 || req->elem.in_num < 1) {
        virtio_error(vdev, "virtio-blk missing headers");
        return -1;
    }

    if (unlikely(iov_to_buf(out_iov, out_num, 0, &req->out,
                            sizeof(req->out)) != sizeof(req->out))) {
        virtio_error(vdev, "virtio-blk request outhdr too short");
        return -1;
    }

    iov_discard_front_undoable(&out_iov, &out_num, sizeof(req->out),
                               &req->outhdr_undo);

    if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
        virtio_error(vdev, "virtio-blk request inhdr too short");
        iov_discard_undo(&req->outhdr_undo);
        return -1;
    }

    /* We always touch the last byte, so just see how big in_iov is.  */
    req->in_len = iov_size(in_iov, in_num);
    req->in = (void *)in_iov[in_num - 1].iov_base
              + in_iov[in_num - 1].iov_len
              - sizeof(struct virtio_blk_inhdr);
    iov_discard_back_undoable(in_iov, &in_num, sizeof(struct virtio_blk_inhdr),
                              &req->inhdr_undo);

    type = virtio_ldl_p(vdev, &req->out.type);

    /* VIRTIO_BLK_T_OUT defines the command direction. VIRTIO_BLK_T_BARRIER
     * is an optional flag. Although a guest should not send this flag if
     * not negotiated we ignored it in the past. So keep ignoring it. */
    switch (type & ~(VIRTIO_BLK_T_OUT | VIRTIO_BLK_T_BARRIER)) {
    case VIRTIO_BLK_T_IN:
    {
        bool is_write = type & VIRTIO_BLK_T_OUT;
        req->sector_num = virtio_ldq_p(vdev, &req->out.sector);

        if (is_write) {
            qemu_iovec_init_external(&req->qiov, out_iov, out_num);
            trace_virtio_blk_handle_write(vdev, req, req->sector_num,
                                          req->qiov.size / BDRV_SECTOR_SIZE);
        } else {
            qemu_iovec_init_external(&req->qiov, in_iov, in_num);
            trace_virtio_blk_handle_read(vdev, req, req->sector_num,
                                         req->qiov.size / BDRV_SECTOR_SIZE);
        }

        if (!virtio_blk_sect_range_ok(s, req->sector_num, req->qiov.size)) {
            virtio_blk_req_complete(req, VIRTIO_BLK_S_IOERR);
            block_acct_invalid(blk_get_stats(s->blk),
                               is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ);
            virtio_blk_free_request(req);
            return 0;
        }

        block_acct_start(blk_get_stats(s->blk), &req->acct, req->qiov.size,
                         is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ);

        /* merge would exceed maximum number of requests or IO direction
         * changes */
        if (mrb->num_reqs > 0 && (mrb->num_reqs == VIRTIO_BLK_MAX_MERGE_REQS ||
                                  is_write != mrb->is_write ||
                                  !s->conf.request_merging)) {
            virtio_blk_submit_multireq(s, mrb);
        }

        assert(mrb->num_reqs < VIRTIO_BLK_MAX_MERGE_REQS);
        mrb->reqs[mrb->num_reqs++] = req;
        mrb->is_write = is_write;
        break;
    }
    case VIRTIO_BLK_T_FLUSH:
        virtio_blk_handle_flush(req, mrb);
        break;
    case VIRTIO_BLK_T_ZONE_REPORT:
        virtio_blk_handle_zone_report(req, in_iov, in_num);
        break;
    case VIRTIO_BLK_T_ZONE_OPEN:
        virtio_blk_handle_zone_mgmt(req, BLK_ZO_OPEN);
        break;
    case VIRTIO_BLK_T_ZONE_CLOSE:
        virtio_blk_handle_zone_mgmt(req, BLK_ZO_CLOSE);
        break;
    case VIRTIO_BLK_T_ZONE_FINISH:
        virtio_blk_handle_zone_mgmt(req, BLK_ZO_FINISH);
        break;
    case VIRTIO_BLK_T_ZONE_RESET:
        virtio_blk_handle_zone_mgmt(req, BLK_ZO_RESET);
        break;
    case VIRTIO_BLK_T_ZONE_RESET_ALL:
        virtio_blk_handle_zone_mgmt(req, BLK_ZO_RESET);
        break;
    case VIRTIO_BLK_T_SCSI_CMD:
        virtio_blk_handle_scsi(req);
        break;
    case VIRTIO_BLK_T_GET_ID:
    {
        /*
         * NB: per existing s/n string convention the string is
         * terminated by '\0' only when shorter than buffer.
         */
        const char *serial = s->conf.serial ? s->conf.serial : "";
        size_t size = MIN(strlen(serial) + 1,
                          MIN(iov_size(in_iov, in_num),
                              VIRTIO_BLK_ID_BYTES));
        iov_from_buf(in_iov, in_num, 0, serial, size);
        virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
        virtio_blk_free_request(req);
        break;
    }
    case VIRTIO_BLK_T_ZONE_APPEND & ~VIRTIO_BLK_T_OUT:
        /*
         * Passing out_iov/out_num and in_iov/in_num is not safe
         * to access req->elem.out_sg directly because it may be
         * modified by virtio_blk_handle_request().
         */
        virtio_blk_handle_zone_append(req, out_iov, in_iov, out_num, in_num);
        break;
    /*
     * VIRTIO_BLK_T_DISCARD and VIRTIO_BLK_T_WRITE_ZEROES are defined with
     * VIRTIO_BLK_T_OUT flag set. We masked this flag in the switch statement,
     * so we must mask it for these requests, then we will check if it is set.
     */
    case VIRTIO_BLK_T_DISCARD & ~VIRTIO_BLK_T_OUT:
    case VIRTIO_BLK_T_WRITE_ZEROES & ~VIRTIO_BLK_T_OUT:
    {
        struct virtio_blk_discard_write_zeroes dwz_hdr;
        size_t out_len = iov_size(out_iov, out_num);
        bool is_write_zeroes = (type & ~VIRTIO_BLK_T_BARRIER) ==
                               VIRTIO_BLK_T_WRITE_ZEROES;
        uint8_t err_status;

        /*
         * Unsupported if VIRTIO_BLK_T_OUT is not set or the request contains
         * more than one segment.
         */
        if (unlikely(!(type & VIRTIO_BLK_T_OUT) ||
                     out_len > sizeof(dwz_hdr))) {
            virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
            virtio_blk_free_request(req);
            return 0;
        }

        if (unlikely(iov_to_buf(out_iov, out_num, 0, &dwz_hdr,
                                sizeof(dwz_hdr)) != sizeof(dwz_hdr))) {
            iov_discard_undo(&req->inhdr_undo);
            iov_discard_undo(&req->outhdr_undo);
            virtio_error(vdev, "virtio-blk discard/write_zeroes header"
                         " too short");
            return -1;
        }

        err_status = virtio_blk_handle_discard_write_zeroes(req, &dwz_hdr,
                                                            is_write_zeroes);
        if (err_status != VIRTIO_BLK_S_OK) {
            virtio_blk_req_complete(req, err_status);
            virtio_blk_free_request(req);
        }

        break;
    }
    default:
        virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
        virtio_blk_free_request(req);
    }
    return 0;
}

virtio_blk_get_request() QEMU

Pops one descriptor chain off the virtqueue.

static VirtIOBlockReq *virtio_blk_get_request(VirtIOBlock *s, VirtQueue *vq)
{
    VirtIOBlockReq *req = virtqueue_pop(vq, sizeof(VirtIOBlockReq));

    if (req) {
        virtio_blk_init_request(s, vq, req);
    }
    return req;
}

Virtio-blk Initialization Process

Guest kernel:

static int __init virtio_blk_init(void)
{
    // this is a workqueue, not a virtqueue
	virtblk_wq = alloc_workqueue("virtio-blk", 0, 0);
    //...
	major = register_blkdev(0, "virtblk");
    //...
	register_virtio_driver(&virtio_blk);
    //...
}
module_init(virtio_blk_init);
module_exit(virtio_blk_fini);

// This is a kernel module; it registers the virtio_bus bus type
virtio_init
    bus_register(&virtio_bus)

// At this point a device is attached
virtio_pci_probe
    register_virtio_device
        device_add
            bus_probe_device
                device_initial_probe
                    __device_attach
                        // walk every driver on the bus to see whether one matches
                        bus_for_each_drv(dev->bus, NULL, &data, __device_attach_driver);
                            driver_match_device
                                drv->bus->match(dev, drv)
                                    virtio_dev_match

// the virtblk driver starts probing the device
virtblk_probe
    init_vq
        virtio_find_vqs
            vp_modern_find_vqs
                vp_find_vqs
                    vp_find_vqs_msix
                        for (i = 0; i < nvqs; ++i) { // common
                            vqs[i] = vp_setup_vq(vdev, queue_idx++, callbacks[i], names[i], ctx ? ctx[i] : false, msix_vec);
                                vp_dev->setup_vq
                                    // Note: this function has a modern and a legacy version, in two different files;
                                    // the legacy one is used as the example here
                                    setup_vq
                                        vring_create_virtqueue
                                        // tell the host the pfn of the virtqueue,
                                        // which also establishes the memory mapping
                                        vp_legacy_set_queue_address
                                            iowrite16(index, ldev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
                                        	iowrite32(queue_pfn, ldev->ioaddr + VIRTIO_PCI_QUEUE_PFN);

// virtio device
  • Register the virtio bus
  • Register a virtio device driver, e.g. the virtio net driver or the virtio blk driver
  • Register the virtio device
  • This triggers the driver core's .match operation, virtio_dev_match()
  • After a successful match, the driver's probe function (virtnet_probe/virtblk_probe) probes the virtio device
static struct bus_type virtio_bus = {
	.name  = "virtio",
	.match = virtio_dev_match,
	.dev_groups = virtio_dev_groups,
	.uevent = virtio_uevent,
	.probe = virtio_dev_probe,
	.remove = virtio_dev_remove,
};

register_blkdev() Guest kernel driver


// the probe callback passed here is NULL
#define register_blkdev(major, name) \
	__register_blkdev(major, name, NULL)

int __register_blkdev(unsigned int major, const char *name,
		void (*probe)(dev_t devt))
{
	struct blk_major_name **n, *p;
	int index, ret = 0;

    //...
	/* temporary */
	if (major == 0) {
		for (index = ARRAY_SIZE(major_names)-1; index > 0; index--) {
			if (major_names[index] == NULL)
				break;
		}

		if (index == 0) {
			printk("%s: failed to get major for %s\n",
			       __func__, name);
			ret = -EBUSY;
			goto out;
		}
		major = index;
		ret = major;
	}

	if (major >= BLKDEV_MAJOR_MAX) {
		pr_err("%s: major requested (%u) is greater than the maximum (%u) for %s\n",
		       __func__, major, BLKDEV_MAJOR_MAX-1, name);

		ret = -EINVAL;
		goto out;
	}

	p = kmalloc(sizeof(struct blk_major_name), GFP_KERNEL);
	if (p == NULL) {
		ret = -ENOMEM;
		goto out;
	}

	p->major = major;
#ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
	p->probe = probe;
#endif
	strscpy(p->name, name, sizeof(p->name));
	p->next = NULL;
	index = major_to_index(major);

	spin_lock(&major_names_spinlock);
	for (n = &major_names[index]; *n; n = &(*n)->next) {
		if ((*n)->major == major)
			break;
	}
	if (!*n)
		*n = p;
	else
		ret = -EBUSY;
	spin_unlock(&major_names_spinlock);

	if (ret < 0) {
		printk("register_blkdev: cannot get major %u for %s\n",
		       major, name);
		kfree(p);
	}
out:
	mutex_unlock(&major_names_lock);
	return ret;
}

init_vq() Guest kernel driver

Despite its generic-sounding name, init_vq() is a function private to the virtio-blk driver.

// As soon as a new virtblk device is discovered,
// init_vq() is called to initialize it.
static struct virtio_driver virtio_blk = {
    //...
	.probe				= virtblk_probe,
};
virtblk_probe
    init_vq
static int init_vq(struct virtio_blk *vblk)
{
	int err;
	int i;
	vq_callback_t **callbacks;
	const char **names;
	struct virtqueue **vqs;
	unsigned short num_vqs;
	unsigned int num_poll_vqs;
	struct virtio_device *vdev = vblk->vdev;
	struct irq_affinity desc = { 0, };

    // check whether the multiqueue feature is negotiated
	err = virtio_cread_feature(vdev, VIRTIO_BLK_F_MQ,
				   struct virtio_blk_config, num_queues,
				   &num_vqs);

    //...
	num_vqs = min_t(unsigned int,
			min_not_zero(num_request_queues, nr_cpu_ids),
			num_vqs);

	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);

	vblk->io_queues[HCTX_TYPE_DEFAULT] = num_vqs - num_poll_vqs;
	vblk->io_queues[HCTX_TYPE_READ] = 0;
	vblk->io_queues[HCTX_TYPE_POLL] = num_poll_vqs;

	dev_info(&vdev->dev, "%d/%d/%d default/read/poll queues\n",
				vblk->io_queues[HCTX_TYPE_DEFAULT],
				vblk->io_queues[HCTX_TYPE_READ],
				vblk->io_queues[HCTX_TYPE_POLL]);

    // Here the guest driver allocates the memory for the virtqueues, of type virtio_blk_vq,
    // as one array sized by num_vqs
	vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
    //...

	names = kmalloc_array(num_vqs, sizeof(*names), GFP_KERNEL);
	callbacks = kmalloc_array(num_vqs, sizeof(*callbacks), GFP_KERNEL);
    // Unlike vblk->vqs, whose element type is struct virtio_blk_vq,
    // this array holds struct virtqueue pointers.
    // Only the bookkeeping structs for the virtqueues are allocated here; presumably
    // this does not yet allocate the shared-memory rings of each virtqueue.
	vqs = kmalloc_array(num_vqs, sizeof(*vqs), GFP_KERNEL);
    //...
    // only the first num_vqs - num_poll_vqs queues need a callback function
	for (i = 0; i < num_vqs - num_poll_vqs; i++) {
		callbacks[i] = virtblk_done;
        //...
		names[i] = vblk->vqs[i].name;
	}

	for (; i < num_vqs; i++) {
		callbacks[i] = NULL;
        //...
		names[i] = vblk->vqs[i].name;
	}

	/* Discover virtqueues and write information to configuration.  */
	err = virtio_find_vqs(vdev, num_vqs, vqs, callbacks, names, &desc);
    //...

	for (i = 0; i < num_vqs; i++) {
        // copy the values from the temporary array into the matching slots of vblk->vqs
		vblk->vqs[i].vq = vqs[i];
	}
	vblk->num_vqs = num_vqs;
    //...
}

VirtIO BLK Live Migration

const VMStateInfo virtio_vmstate_info = {
    .name = "virtio",
    .get = virtio_device_get,
    .put = virtio_device_put,
};

// Expanding this macro shows that it simply uses the virtio_vmstate_info defined above
#define VMSTATE_VIRTIO_DEVICE \
    {                                         \
        .name = "virtio",                     \
        .info = &virtio_vmstate_info,         \
        .flags = VMS_SINGLE,                  \
    }

// virtio_blk, virtio_input, and virtio_mem all use this macro.
// That is, their get and put functions are all the same: virtio_device_get and virtio_device_put.
// Within those functions, each device has its own hooks.
static const VMStateDescription vmstate_virtio_blk = {
    //...
    .fields = (const VMStateField[]) {
        VMSTATE_VIRTIO_DEVICE,
    },
};
static const VMStateDescription vmstate_virtio_input = {
    //...
    .fields = (const VMStateField[]) {
        VMSTATE_VIRTIO_DEVICE,
    },
};
static const VMStateDescription vmstate_virtio_mem = {
    //...
    .fields = (const VMStateField[]) {
        VMSTATE_VIRTIO_DEVICE,
    },
};

It is registered into savevm_state.handlers as follows:

device_set_realized
    if (qdev_get_vmsd(dev))
        vmstate_register_with_alias_id

The desc, avail, and used rings all have to be migrated over.

early_setup is not set, so this state should be sent only after the RAM data (the precopy phase) has been transferred.

static const VMStateDescription vmstate_virtio_blk = {
    .name = "virtio-blk",
    .minimum_version_id = 2,
    .version_id = 2,
    .fields = (VMStateField[]) {
        // Note that virtio-blk has nothing special of its own to send;
        // all blk-specific handling happens in the callback functions.
        VMSTATE_VIRTIO_DEVICE,
        VMSTATE_END_OF_LIST()
    },
};

static void virtio_blk_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);

    // virtio-blk overrides these functions.
    // They are not only for migration; other paths use them too (e.g. start_ioeventfd())
    device_class_set_props(dc, virtio_blk_properties);
    dc->vmsd = &vmstate_virtio_blk;
    set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);


    vdc->realize = virtio_blk_device_realize;
    vdc->unrealize = virtio_blk_device_unrealize;
    vdc->get_config = virtio_blk_update_config;
    vdc->set_config = virtio_blk_set_config;
    vdc->get_features = virtio_blk_get_features;
    vdc->set_status = virtio_blk_set_status;
    vdc->reset = virtio_blk_reset;
    vdc->save = virtio_blk_save_device;
    vdc->load = virtio_blk_load_device;
    vdc->start_ioeventfd = virtio_blk_data_plane_start;
    vdc->stop_ioeventfd = virtio_blk_data_plane_stop;
}

.save() / virtio_blk_save_device() QEMU

Sends over every request that is still outstanding.

static void virtio_blk_save_device(VirtIODevice *vdev, QEMUFile *f)
{
    VirtIOBlock *s = VIRTIO_BLK(vdev);
    WITH_QEMU_LOCK_GUARD(&s->rq_lock) {
        VirtIOBlockReq *req = s->rq;
        while (req) {
            //...
            // If multiple virtqueues are configured, we also have to send
            // the index of the virtqueue this request sits on
            if (s->conf.num_queues > 1)
                qemu_put_be32(f, virtio_get_queue_index(req->vq));
            // An element corresponds to one descriptor chain;
            // the descriptor chain backing this request is itself sent over.
            qemu_put_virtqueue_element(vdev, f, &req->elem);
            // next: the following request on the s->rq list (see struct VirtIOBlockReq^)
            req = req->next;
        }
    }
    //...
}

.load() / virtio_blk_load_device() QEMU

It looks like the order implied by next is the reverse of the save order: save walks from head to tail, but load prepends each arriving request, so the old head ends up last. Is that reasonable? Presumably yes, since each outstanding request is simply resubmitted on its own.

static int virtio_blk_load_device(VirtIODevice *vdev, QEMUFile *f, int version_id)
{
    VirtIOBlock *s = VIRTIO_BLK(vdev);

    while (qemu_get_sbyte(f)) {
        unsigned nvqs = s->conf.num_queues;
        unsigned vq_idx = 0;
        VirtIOBlockReq *req;

        if (nvqs > 1)
            // get the index of the virtqueue this request belongs to
            vq_idx = qemu_get_be32(f);
            //...

        // fetch the corresponding request (element)
        req = qemu_get_virtqueue_element(vdev, f, sizeof(VirtIOBlockReq));
        virtio_blk_init_request(s, virtio_get_queue(vdev, vq_idx), req);
        req->next = s->rq;
        s->rq = req;
    }

    return 0;
}

VirtIO BLK Initialization Process in QEMU

static const TypeInfo virtio_blk_info = {
    .name = TYPE_VIRTIO_BLK,
    // the parent type is the virtio device
    .parent = TYPE_VIRTIO_DEVICE,
    .instance_size = sizeof(VirtIOBlock),
    .instance_init = virtio_blk_instance_init,
    .class_init = virtio_blk_class_init,
};

// the one global function that initializes this class
static void virtio_blk_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);

    device_class_set_props(dc, virtio_blk_properties);
    // vmsd belongs to the Class, not to an individual Object
    dc->vmsd = &vmstate_virtio_blk;
    set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
    // overrides the realize function of virtio (the parent class)
    vdc->realize = virtio_blk_device_realize;
    // overrides the unrealize function of virtio (the parent class)
    vdc->unrealize = virtio_blk_device_unrealize;
    vdc->get_config = virtio_blk_update_config;
    vdc->set_config = virtio_blk_set_config;
    vdc->get_features = virtio_blk_get_features;
    vdc->set_status = virtio_blk_set_status;
    vdc->reset = virtio_blk_reset;
    // migration hooks specific to virtio devices
    vdc->save = virtio_blk_save_device;
    vdc->load = virtio_blk_load_device;
    vdc->start_ioeventfd = virtio_blk_data_plane_start;
    vdc->stop_ioeventfd = virtio_blk_data_plane_stop;
}

static void virtio_blk_instance_init(Object *obj)
{
    VirtIOBlock *s = VIRTIO_BLK(obj);

    device_add_bootindex_property(obj, &s->conf.conf.bootindex,
                                  "bootindex", "/disk@0,0",
                                  DEVICE(obj));
}

VirtIO BLK Device Implementation in QEMU

Difference between file hw/block/virtio-blk.c and hw/block/vhost-user-blk.c

vhost-user-blk is an upgrade over virtio-blk, just as vhost-user-scsi is an upgrade over virtio-scsi: the virtqueue processing is handed over the vhost-user protocol to a separate user-space process instead of being emulated inside QEMU itself.

struct VirtIOBlock QEMU

struct VirtIOBlock {
    VirtIODevice parent_obj;
    BlockBackend *blk;
    QemuMutex rq_lock;
    // the list of outstanding requests, chained via req->next;
    // saved and restored at migration time (see virtio_blk_save_device()^)
    void *rq; /* protected by rq_lock */
    VirtIOBlkConf conf;
    unsigned short sector_mask;
    bool original_wce;
    VMChangeStateEntry *change;
    bool dataplane_disabled;
    bool dataplane_started;
    struct VirtIOBlockDataPlane *dataplane;
    uint64_t host_features;
    size_t config_size;
    BlockRAMRegistrar blk_ram_registrar;
};

struct VirtIOBlockReq QEMU

The memory layout of this struct seems to line up with the (void *) returned by virtqueue_pop(); see virtqueue_pop()^.

typedef struct VirtIOBlockReq {
    // One request corresponds to one element, i.e. one descriptor chain, which makes sense.
    // elem contains pointers into arrays whose storage sits right after the VirtIOBlockReq.
    VirtQueueElement elem;
    int64_t sector_num;
    VirtIOBlock *dev;
    VirtQueue *vq;
    IOVDiscardUndo inhdr_undo;
    IOVDiscardUndo outhdr_undo;
    struct virtio_blk_inhdr *in;
    struct virtio_blk_outhdr out;
    QEMUIOVector qiov;
    size_t in_len;
    // the next outstanding request on the s->rq list
    struct VirtIOBlockReq *next;
    struct VirtIOBlockReq *mr_next;
    BlockAcctCookie acct;
} VirtIOBlockReq;

Vhost-user-blk

[Qemu-devel] [PATCH v10 0/4] Introduce a new vhost-user-blk host device to QEMU - Changpeng Liu

FOSDEM 2023 - vhost-user-blk: a fast userspace block I/O interface

Exporting test.img at vhost-user-blk.sock:

qemu-storage-daemon \
 --blockdev file,filename=test.img,node-name=file0 \
 --export vhost-user-blk,node-name=file0,addr.type=unix,addr.path=vhost-user-blk.sock,writable=on
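
On the QEMU side, a guest can then attach to that socket. A hedged sketch (exact options vary across QEMU versions; vhost-user requires guest RAM to live in a shareable memory backend):

qemu-system-x86_64 \
 -m 4G \
 -object memory-backend-memfd,id=mem,size=4G,share=on \
 -numa node,memdev=mem \
 -chardev socket,id=char0,path=vhost-user-blk.sock \
 -device vhost-user-blk-pci,chardev=char0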

QEMU itself also uses the libblkio library to provide the virtio-blk functionality:

// the API provided by libblkio
struct blkio *b;
blkio_create("virtio-blk-vhost-user", &b);
blkio_set_str(b, "path", "vhost-user-blk.sock");
blkio_connect(b);
blkio_start(b);
struct blkioq *q = blkio_get_queue(b, 0);
blkioq_read(q, 0x10000, buf, buf_size, NULL, 0);
struct blkio_completion c;
ret = blkioq_do_io(q, &c, 1, 1, NULL);
if (ret != 1 || c.ret != 0) ...

// block/blkio.c in QEMU
blk_new_open
    bdrv_open_inherit
        bdrv_open_driver
            blkio_file_open
                blkio_create
                blkio_connect
                //...

VirtIO BLK Driver Implementation in Guest Kernel