[dpdk-dev] virtio: add new driver for crypto devices

Message ID 1510938620-15268-1-git-send-email-jianjay.zhou@huawei.com (mailing list archive)
State Superseded, archived
Delegated to: Pablo de Lara Guarch

Checks

Context               Check    Description
ci/checkpatch         warning  coding style issues
ci/Intel-compilation  success  Compilation OK

Commit Message

Zhoujian (jay) Nov. 17, 2017, 5:10 p.m. UTC
  This patch series adds a PMD for virtio-crypto devices.

This patch series supports a limited subset of crypto services:
currently symmetric algorithms (cipher-only or chained algorithms),
and only the following algorithms are tested (because the virtio-crypto
backend currently supports cipher only):

Cipher algorithms:
  - RTE_CRYPTO_CIPHER_AES_CBC (with 128-bit, 192-bit and 256-bit keys supported)
  - RTE_CRYPTO_CIPHER_AES_CTR (with 128-bit, 192-bit and 256-bit keys supported)

First, build QEMU with libgcrypt cryptography support.
QEMU can then be started using the following parameters:
qemu-system-x86_64 \
    [...] \
        -object cryptodev-backend-builtin,id=cryptodev0 \
        -device virtio-crypto-pci,id=crypto0,cryptodev=cryptodev0 \
    [...]

Follow the steps listed in the link below

https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html

to bind the uio_pci_generic driver to the virtio-crypto device.
For example, if 0000:00:04.0 is the domain, bus, device and function
number of the virtio-crypto device:
    modprobe uio_pci_generic
    echo -n 0000:00:04.0 > /sys/bus/pci/drivers/virtio-pci/unbind
    echo "1af4 1054" > /sys/bus/pci/drivers/uio_pci_generic/new_id

The front-end virtio crypto PMD can then be built and installed:
    cd to the top-level DPDK directory
    sed -i 's,\(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO\)=n,\1=y,' config/common_base
    make config T=x86_64-native-linuxapp-gcc
    make install T=x86_64-native-linuxapp-gcc

The test cases can be compiled as below:
    cd to the top-level DPDK directory
    export RTE_TARGET=x86_64-native-linuxapp-gcc
    export RTE_SDK=`pwd`
    cd test/test
    make
    ./test (hugepage memory MUST be reserved beforehand)
    type "cryptodev_virtio_autotest" at the RTE>> prompt to run the tests

The result should be like this:
RTE>>cryptodev_virtio_autotest
 + ------------------------------------------------------- +
 + Test Suite : Crypto VIRTIO Unit Test Suite
 + ------------------------------------------------------- +
  0) TestCase AES-128-CBC Encryption PASS
  1) TestCase AES-128-CBC Decryption PASS
  2) TestCase AES-192-CBC Encryption PASS
  3) TestCase AES-192-CBC Decryption PASS
  4) TestCase AES-256-CBC Encryption PASS
  5) TestCase AES-256-CBC Decryption PASS
  6) TestCase AES-256-CBC OOP Encryption PASS
  7) TestCase AES-256-CBC OOP Decryption PASS
  8) TestCase AES-128-CTR Encryption PASS
  9) TestCase AES-128-CTR Decryption PASS
  10) TestCase AES-192-CTR Encryption PASS
  11) TestCase AES-192-CTR Decryption PASS
  12) TestCase AES-256-CTR Encryption PASS
  13) TestCase AES-256-CTR Decryption PASS
 + TestCase [ 0] : test_AES_cipheronly_virtio_all succeeded
 + ------------------------------------------------------- +
 + Test Suite Summary
 + Tests Total :        1
 + Tests Skipped :      0
 + Tests Executed :     1
 + Tests Unsupported:   0
 + Tests Passed :       1
 + Tests Failed :       0
 + ------------------------------------------------------- +
Test OK

TODO:
 - Only basic functional tests are included (chained-algorithm and
   performance tests are not supported yet)
 - Only the session-oriented API is implemented (session-less
   APIs are not supported); see the usage sketch below
 - Hash-only is not supported (depends on the virtio-crypto backend)
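
For reference, a minimal sketch of the session-oriented flow this PMD
targets, using the DPDK symmetric cryptodev API of this period (the
helper name and the AES-128-CBC parameters are illustrative assumptions,
not part of this patch):

    #include <rte_cryptodev.h>

    /* Hypothetical helper: create an AES-128-CBC encrypt session on
     * device dev_id, drawing the session from a caller-owned mempool.
     */
    static struct rte_cryptodev_sym_session *
    create_aes_cbc_session(uint8_t dev_id, struct rte_mempool *sess_mp,
            uint8_t *key)   /* 16-byte key */
    {
        struct rte_crypto_sym_xform xform = {
            .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
            .cipher = {
                .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
                .key = { .data = key, .length = 16 },
                /* IV is carried in each op's private area */
                .iv = {
                    .offset = sizeof(struct rte_crypto_op) +
                        sizeof(struct rte_crypto_sym_op),
                    .length = 16,
                },
            },
        };
        struct rte_cryptodev_sym_session *sess =
            rte_cryptodev_sym_session_create(sess_mp);

        if (sess == NULL)
            return NULL;
        if (rte_cryptodev_sym_session_init(dev_id, sess, &xform,
                sess_mp) != 0)
            return NULL;
        return sess;
    }

Data-path code then attaches this session to each crypto op and uses
rte_cryptodev_enqueue_burst()/rte_cryptodev_dequeue_burst() as with any
other cryptodev PMD.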

Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
---
 MAINTAINERS                                 |    4 +
 config/common_base                          |   20 +
 drivers/crypto/Makefile                     |    1 +
 drivers/crypto/virtio/Makefile              |   59 +
 drivers/crypto/virtio/virtio_crypto.h       |  452 ++++++++
 drivers/crypto/virtio/virtio_crypto_algs.h  |   56 +
 drivers/crypto/virtio/virtio_cryptodev.c    | 1542 +++++++++++++++++++++++++++
 drivers/crypto/virtio/virtio_cryptodev.h    |   87 ++
 drivers/crypto/virtio/virtio_logs.h         |   76 ++
 drivers/crypto/virtio/virtio_pci.c          |  714 +++++++++++++
 drivers/crypto/virtio/virtio_pci.h          |  286 +++++
 drivers/crypto/virtio/virtio_ring.h         |  169 +++
 drivers/crypto/virtio/virtio_rxtx.c         |  569 ++++++++++
 drivers/crypto/virtio/virtqueue.c           |   80 ++
 drivers/crypto/virtio/virtqueue.h           |  208 ++++
 mk/rte.app.mk                               |    1 +
 test/test/test_cryptodev.c                  |   49 +
 test/test/test_cryptodev.h                  |    1 +
 test/test/test_cryptodev_aes_test_vectors.h |   45 +-
 test/test/test_cryptodev_blockcipher.c      |    9 +-
 test/test/test_cryptodev_blockcipher.h      |    1 +
 21 files changed, 4413 insertions(+), 16 deletions(-)
 create mode 100644 drivers/crypto/virtio/Makefile
 create mode 100644 drivers/crypto/virtio/virtio_crypto.h
 create mode 100644 drivers/crypto/virtio/virtio_crypto_algs.h
 create mode 100644 drivers/crypto/virtio/virtio_cryptodev.c
 create mode 100644 drivers/crypto/virtio/virtio_cryptodev.h
 create mode 100644 drivers/crypto/virtio/virtio_logs.h
 create mode 100644 drivers/crypto/virtio/virtio_pci.c
 create mode 100644 drivers/crypto/virtio/virtio_pci.h
 create mode 100644 drivers/crypto/virtio/virtio_ring.h
 create mode 100644 drivers/crypto/virtio/virtio_rxtx.c
 create mode 100644 drivers/crypto/virtio/virtqueue.c
 create mode 100644 drivers/crypto/virtio/virtqueue.h
  

Comments

Fan Zhang Nov. 27, 2017, 4:47 p.m. UTC | #1
Hi Jay,

Thanks for contributing to DPDK.

The code has been tested and works fine. 

A few comments:

1. Could you split the patch into a patchset, as suggested in the contribution guide at http://dpdk.org/doc/guides/contributing/patches.html, section 5.4?
2. Please update doc/guides/cryptodevs to describe your virtio crypto PMD.
3. Please update doc/guides/rel_notes/release_18.02. 
4. One more comment inline

Regards,
Fan

> -----Original Message-----
> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
> Sent: Friday, November 17, 2017 5:10 PM
> To: dev@dpdk.org
> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> arei.gonglei@huawei.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Zeng,
> Xin <xin.zeng@intel.com>; weidong.huang@huawei.com;
> wangxinxin.wang@huawei.com; longpeng2@huawei.com;
> jianjay.zhou@huawei.com
> Subject: [PATCH] virtio: add new driver for crypto devices
> +	/*
> +	 * malloc memory to store indirect vring_desc entries, including
> +	 * ctrl request, cipher key, auth key, session input and desc vring
> +	 */
> +	desc_offset = ctrl_req_length + cipher_keylen + auth_keylen
> +		+ input_length;

Instead of using rte_malloc() as below, you could pre-allocate a mempool and use
rte_mempool_get() or rte_mempool_get_bulk() to get this memory to store descriptors.
You can use rte_mempool_virt2iova() to obtain the physical address of this memory. This should
give better performance.

> +	virt_addr_started = rte_malloc(NULL,
> +		desc_offset +
> NUM_ENTRY_VIRTIO_CRYPTO_SYM_CREATE_SESSION
> +			* sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
> +	if (virt_addr_started == NULL) {
> +		PMD_SESSION_LOG(ERR, "not enough heap memory");
> +		return -ENOSPC;
> +	}
> +	phys_addr_started = rte_malloc_virt2phy(virt_addr_started);
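
A minimal sketch of this mempool-based alternative (the pool size,
element size and helper names below are illustrative assumptions, not
part of the patch):

    #include <errno.h>
    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Assumed upper bound for ctrl request + keys + session input +
     * indirect descriptors. */
    #define SESS_REQ_SIZE 2048

    /* Created once at device init, instead of rte_malloc() per session. */
    static struct rte_mempool *sess_req_pool;

    static int
    sess_req_pool_init(void)
    {
        sess_req_pool = rte_mempool_create("vcrypto_sess_req",
                1024,           /* number of elements */
                SESS_REQ_SIZE,  /* element size */
                32,             /* per-lcore cache size */
                0, NULL, NULL, NULL, NULL,
                rte_socket_id(), 0);
        return sess_req_pool == NULL ? -ENOMEM : 0;
    }

    /* Per session: replaces rte_malloc()/rte_malloc_virt2phy(); the
     * object goes back via rte_mempool_put() when the session is
     * destroyed. */
    static int
    sess_req_get(void **virt, rte_iova_t *iova)
    {
        if (rte_mempool_get(sess_req_pool, virt) != 0)
            return -ENOSPC;
        *iova = rte_mempool_virt2iova(*virt);
        return 0;
    }

Objects come from a per-lcore-cached pool, so session creation avoids a
heap allocation, and the address translation is a constant-time lookup
from the mempool object header.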
  
Zhoujian (jay) Nov. 28, 2017, 1:27 a.m. UTC | #2
Hi Fan,

On 2017/11/28 0:47, Zhang, Roy Fan wrote:
> Hi Jay,
>
> Thanks for contributing to DPDK.
>
> The code has been tested and works fine.
>
> A few comments:
>
> 1. Could you split the patch into a patchset, as suggested in the contribution guide at http://dpdk.org/doc/guides/contributing/patches.html, section 5.4?
> 2. Please update doc/guides/cryptodevs to describe your virtio crypto PMD.
> 3. Please update doc/guides/rel_notes/release_18.02.
> 4. One more comment inline
>

I'm a newbie to DPDK. Thanks for testing and pointing these steps
out; I will fix them in V2.

>
>> -----Original Message-----
>> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
>> Sent: Friday, November 17, 2017 5:10 PM
>> To: dev@dpdk.org
>> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
>> arei.gonglei@huawei.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Zeng,
>> Xin <xin.zeng@intel.com>; weidong.huang@huawei.com;
>> wangxinxin.wang@huawei.com; longpeng2@huawei.com;
>> jianjay.zhou@huawei.com
>> Subject: [PATCH] virtio: add new driver for crypto devices
>> +	/*
>> +	 * malloc memory to store indirect vring_desc entries, including
>> +	 * ctrl request, cipher key, auth key, session input and desc vring
>> +	 */
>> +	desc_offset = ctrl_req_length + cipher_keylen + auth_keylen
>> +		+ input_length;
>
> Instead of using rte_malloc() as below, you could pre-allocate a mempool and use
> rte_mempool_get() or rte_mempool_get_bulk() to get this memory to store descriptors.
> You can use rte_mempool_virt2iova() to obtain the physical address of this memory. This should
> give better performance.

I will give it a try.

Regards,
Jay

>
>> +	virt_addr_started = rte_malloc(NULL,
>> +		desc_offset +
>> NUM_ENTRY_VIRTIO_CRYPTO_SYM_CREATE_SESSION
>> +			* sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
>> +	if (virt_addr_started == NULL) {
>> +		PMD_SESSION_LOG(ERR, "not enough heap memory");
>> +		return -ENOSPC;
>> +	}
>> +	phys_addr_started = rte_malloc_virt2phy(virt_addr_started);
>
> .
>
  
Fan Zhang Nov. 29, 2017, 10:14 a.m. UTC | #3
Hi Jay,

No worries. Welcome to DPDK :-D

Regards,
Fan

> -----Original Message-----
> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
> Sent: Tuesday, November 28, 2017 1:27 AM
> To: Zhang, Roy Fan <roy.fan.zhang@intel.com>; dev@dpdk.org
> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> arei.gonglei@huawei.com; Zeng, Xin <xin.zeng@intel.com>;
> weidong.huang@huawei.com; wangxinxin.wang@huawei.com;
> longpeng2@huawei.com
> Subject: Re: [PATCH] virtio: add new driver for crypto devices
> 
> Hi Fan,
> 
> On 2017/11/28 0:47, Zhang, Roy Fan wrote:
> > Hi Jay,
> >
> > Thanks for contributing to DPDK.
> >
> > The code has been tested and works fine.
> >
> > A few comments:
> >
> > 1. Could you split the patch into a patchset, as suggested in the
> > contribution guide at http://dpdk.org/doc/guides/contributing/patches.html, section 5.4?
> > 2. Please update doc/guides/cryptodevs to describe your virtio crypto
> > PMD.
> > 3. Please update doc/guides/rel_notes/release_18.02.
> > 4. One more comment inline
> >
> 
> I'm a newbie to DPDK. Thanks for testing and pointing these steps out; I
> will fix them in V2.
> 
> >
> >> -----Original Message-----
> >> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
> >> Sent: Friday, November 17, 2017 5:10 PM
> >> To: dev@dpdk.org
> >> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> >> arei.gonglei@huawei.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> >> Zeng, Xin <xin.zeng@intel.com>; weidong.huang@huawei.com;
> >> wangxinxin.wang@huawei.com; longpeng2@huawei.com;
> >> jianjay.zhou@huawei.com
> >> Subject: [PATCH] virtio: add new driver for crypto devices
> >> +	/*
> >> +	 * malloc memory to store indirect vring_desc entries, including
> >> +	 * ctrl request, cipher key, auth key, session input and desc vring
> >> +	 */
> >> +	desc_offset = ctrl_req_length + cipher_keylen + auth_keylen
> >> +		+ input_length;
> >
> > Instead of using rte_malloc() as below, you could pre-allocate a
> > mempool and use
> > rte_mempool_get() or rte_mempool_get_bulk() to get this memory to
> > store descriptors.
> > You can use rte_mempool_virt2iova() to obtain the physical address of
> > this memory. This should give better performance.
> 
> I will give it a try.
> 
> Regards,
> Jay
> 
> >
> >> +	virt_addr_started = rte_malloc(NULL,
> >> +		desc_offset +
> >> NUM_ENTRY_VIRTIO_CRYPTO_SYM_CREATE_SESSION
> >> +			* sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
> >> +	if (virt_addr_started == NULL) {
> >> +		PMD_SESSION_LOG(ERR, "not enough heap memory");
> >> +		return -ENOSPC;
> >> +	}
> >> +	phys_addr_started = rte_malloc_virt2phy(virt_addr_started);
> >
> > .
> >
  
Thomas Monjalon Jan. 20, 2018, 3:50 p.m. UTC | #4
Hi,

28/11/2017 02:27, Jay Zhou:
> I'm a newbie to DPDK. Thanks for testing and pointing these steps
> out; I will fix them in V2.

Any news about this work?

I see there is also a patch from Fan Zhang to support crypto in vhost-user.
Do you work together?
  
Thomas Monjalon Jan. 20, 2018, 3:54 p.m. UTC | #5
+Cc Pablo, maintainer of the crypto tree

20/01/2018 16:50, Thomas Monjalon:
> Hi,
> 
> 28/11/2017 02:27, Jay Zhou:
> > I'm a newbie to DPDK. Thanks for testing and pointing these steps
> > out; I will fix them in V2.
> 
> Any news about this work?
> 
> I see there is also a patch from Fan Zhang to support crypto in vhost-user.
> Do you work together?
  
Zhoujian (jay) Jan. 22, 2018, 7:37 a.m. UTC | #6
Hi Thomas,

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Saturday, January 20, 2018 11:50 PM
> To: Zhoujian (jay) <jianjay.zhou@huawei.com>
> Cc: dev@dpdk.org; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> yliu@fridaylinux.org; maxime.coquelin@redhat.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; Zeng, Xin <xin.zeng@intel.com>; Huangweidong (C)
> <weidong.huang@huawei.com>; wangxin (U) <wangxinxin.wang@huawei.com>;
> longpeng <longpeng2@huawei.com>; ferruh.yigit@intel.com
> Subject: Re: [dpdk-dev] [PATCH] virtio: add new driver for crypto devices
> 
> Hi,
> 
> 28/11/2017 02:27, Jay Zhou:
> > I'm a newbie to DPDK. Thanks for testing and pointing these steps
> > out; I will fix them in V2.
> 
> Any news about this work?

Another project is taking up my time as well, so v2 is a little late...
Please help to review it then.

> 
> I see there is also a patch from Fan Zhang to support crypto in vhost-user.
> Do you work together?

Yes, we're working together :)


Regards,
Jay
  
Fan Zhang Jan. 22, 2018, 5:25 p.m. UTC | #7
Hi Thomas,

Yes we are working together on this. Jay and Lei kindly agreed to upstream their work.

Jay is also working on QEMU patches to enable vhost-user as a crypto backend (https://patchwork.ozlabs.org/patch/864058/). 
The merging of Jay's QEMU patches is on the way but may not happen before DPDK 18.02 is out, so I hope you don't mind if we submit both the vhost and virtio crypto patches again for 18.05.

Regards,
Fan

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Saturday, January 20, 2018 3:54 PM
> To: Jay Zhou <jianjay.zhou@huawei.com>
> Cc: dev@dpdk.org; Zhang, Roy Fan <roy.fan.zhang@intel.com>;
> yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> arei.gonglei@huawei.com; Zeng, Xin <xin.zeng@intel.com>;
> weidong.huang@huawei.com; wangxinxin.wang@huawei.com;
> longpeng2@huawei.com; Yigit, Ferruh <ferruh.yigit@intel.com>; De Lara
> Guarch, Pablo <pablo.de.lara.guarch@intel.com>
> Subject: Re: [dpdk-dev] [PATCH] virtio: add new driver for crypto devices
> 
> +Cc Pablo, maintainer of the crypto tree
> 
> 20/01/2018 16:50, Thomas Monjalon:
> > Hi,
> >
> > 28/11/2017 02:27, Jay Zhou:
> > > I'm a newbie to DPDK. Thanks for testing and pointing these steps
> > > out; I will fix them in V2.
> >
> > Any news about this work?
> >
> > I see there is also a patch from Fan Zhang to support crypto in vhost-user.
> > Do you work together?
  
Thomas Monjalon Jan. 22, 2018, 9:01 p.m. UTC | #8
22/01/2018 18:25, Zhang, Roy Fan:
> Hi Thomas,
> 
> Yes we are working together on this. Jay and Lei kindly agreed to upstream their work.
> 
> Jay is also working on Qemu patches to enable vhost-user as crypto backend (https://patchwork.ozlabs.org/patch/864058/). 
> Jay's Qemu patches' merging is on the way but may not happen before DPDK 18.02 is out. So I hope you don't mind we try to submit both vhost and virtio crypto patches again for 18.05.

OK, let's try for 18.05.
Thanks
  
Fan Zhang Jan. 29, 2018, 5:19 p.m. UTC | #9
Hi Jay, 

A few more comments inline.

> -----Original Message-----
> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
> Sent: Friday, November 17, 2017 5:10 PM
> To: dev@dpdk.org
> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
> arei.gonglei@huawei.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Zeng,
> Xin <xin.zeng@intel.com>; weidong.huang@huawei.com;
> wangxinxin.wang@huawei.com; longpeng2@huawei.com;
> jianjay.zhou@huawei.com
> Subject: [PATCH] virtio: add new driver for crypto devices
> 
> +++ b/drivers/crypto/virtio/virtio_crypto.h
> @@ -0,0 +1,452 @@

The file "virtio_crypto.h" is identical to the one in the linux kernel header, right? 
Could you use " #include <linux/virtio_crypto.h>" instead of creating a copy of the file?

> diff --git a/drivers/crypto/virtio/virtio_cryptodev.c
> b/drivers/crypto/virtio/virtio_cryptodev.c
> new file mode 100644
> index 0000000..9e6cd20
> --- /dev/null
> +++ b/drivers/crypto/virtio/virtio_cryptodev.c
> @@ -0,0 +1,1542 @@

...

> +	switch (cmd_id) {
> +	case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
> +	case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
> +		ctrl->u.sym_create_session.op_type
> +			= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;

The above line is clearly a bug.
 
> +		ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl,
> +			xform, true, &cipher_key_data, &auth_key_data,
> session);
> +		if (ret < 0) {
> +			PMD_SESSION_LOG(ERR,
> +				"padding sym op ctrl req failed");
> +			goto error_out;
> +		}
> +		ret = virtio_crypto_send_command(vq, ctrl,
> +			cipher_key_data, auth_key_data, session);
> +		if (ret < 0) {
> +			PMD_SESSION_LOG(ERR,
> +				"create session failed: %d", ret);
> +			goto error_out;
> +		}
> +		break;
> +	case VIRTIO_CRYPTO_CMD_CIPHER:

Again, please try to replace the mallocs with rte_mempool_get() or rte_mempool_get_bulk() for performance reasons.

Best regards,
Fan
  
Fan Zhang Jan. 29, 2018, 5:21 p.m. UTC | #10
Hi Jay,
 
> > +	switch (cmd_id) {
> > +	case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
> > +	case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
> > +		ctrl->u.sym_create_session.op_type
> > +			= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
> 
> The above line is clearly a bug.

My bad, it isn't a bug.

Regards,
Fan
  
Zhoujian (jay) Jan. 30, 2018, 1:56 a.m. UTC | #11
Hi Fan,

On 2018/1/30 1:19, Zhang, Roy Fan wrote:
> Hi Jay,
>
> A few more comments inline.
>
>> -----Original Message-----
>> From: Jay Zhou [mailto:jianjay.zhou@huawei.com]
>> Sent: Friday, November 17, 2017 5:10 PM
>> To: dev@dpdk.org
>> Cc: yliu@fridaylinux.org; maxime.coquelin@redhat.com;
>> arei.gonglei@huawei.com; Zhang, Roy Fan <roy.fan.zhang@intel.com>; Zeng,
>> Xin <xin.zeng@intel.com>; weidong.huang@huawei.com;
>> wangxinxin.wang@huawei.com; longpeng2@huawei.com;
>> jianjay.zhou@huawei.com
>> Subject: [PATCH] virtio: add new driver for crypto devices
>>
>> +++ b/drivers/crypto/virtio/virtio_crypto.h
>> @@ -0,0 +1,452 @@
>
> The file "virtio_crypto.h" is identical to the one in the linux kernel header, right?

Yes.

> Could you use " #include <linux/virtio_crypto.h>" instead of creating a copy of the file?

Okay.

>
>> diff --git a/drivers/crypto/virtio/virtio_cryptodev.c
>> b/drivers/crypto/virtio/virtio_cryptodev.c
>> new file mode 100644
>> index 0000000..9e6cd20
>> --- /dev/null
>> +++ b/drivers/crypto/virtio/virtio_cryptodev.c
>> @@ -0,0 +1,1542 @@
>
> ...
>
>> +	switch (cmd_id) {
>> +	case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
>> +	case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
>> +		ctrl->u.sym_create_session.op_type
>> +			= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
>
> The above line is clearly a bug.
>
>> +		ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl,
>> +			xform, true, &cipher_key_data, &auth_key_data,
>> session);
>> +		if (ret < 0) {
>> +			PMD_SESSION_LOG(ERR,
>> +				"padding sym op ctrl req failed");
>> +			goto error_out;
>> +		}
>> +		ret = virtio_crypto_send_command(vq, ctrl,
>> +			cipher_key_data, auth_key_data, session);
>> +		if (ret < 0) {
>> +			PMD_SESSION_LOG(ERR,
>> +				"create session failed: %d", ret);
>> +			goto error_out;
>> +		}
>> +		break;
>> +	case VIRTIO_CRYPTO_CMD_CIPHER:
>
> Again, please try to replace the mallocs with rte_mempool_get() or rte_mempool_get_bulk() for performance reasons.

Okay, will do. Thanks again for reviewing!

Regards,
Jay

>
> Best regards,
> Fan
>
>
>
>
> .
>
  

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index 8ab08d2..deccd45 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -519,6 +519,10 @@  F: drivers/net/virtio/
 F: doc/guides/nics/virtio.rst
 F: doc/guides/nics/features/virtio*.ini
 
+Virtio crypto PMD
+M: Jay Zhou <jianjay.zhou@huawei.com>
+F: drivers/crypto/virtio/
+
 Wind River AVP
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
diff --git a/config/common_base b/config/common_base
index 82ee754..011a6fe 100644
--- a/config/common_base
+++ b/config/common_base
@@ -504,6 +504,26 @@  CONFIG_RTE_LIBRTE_PMD_QAT_DEBUG_DRIVER=n
 CONFIG_RTE_QAT_PMD_MAX_NB_SESSIONS=2048
 
 #
+# Compile PMD for virtio crypto devices
+#
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_INIT=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_SESSION=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_TX=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_RX=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_DRIVER=n
+CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_DUMP=n
+#
+# Number of maximum virtio crypto devices
+#
+CONFIG_RTE_MAX_VIRTIO_CRYPTO=32
+#
+# Number of sessions to create in the session memory pool
+# on a single virtio crypto device.
+#
+CONFIG_RTE_VIRTIO_CRYPTO_PMD_MAX_NB_SESSIONS=1024
+
+#
 # Compile PMD for AESNI backed device
 #
 CONFIG_RTE_LIBRTE_PMD_AESNI_MB=n
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 645b696..df44026 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -44,5 +44,6 @@  DIRS-$(CONFIG_RTE_LIBRTE_PMD_MRVL_CRYPTO) += mrvl
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += null
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA2_SEC) += dpaa2_sec
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_DPAA_SEC) += dpaa_sec
+DIRS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += virtio
 
 include $(RTE_SDK)/mk/rte.subdir.mk
diff --git a/drivers/crypto/virtio/Makefile b/drivers/crypto/virtio/Makefile
new file mode 100644
index 0000000..084e657
--- /dev/null
+++ b/drivers/crypto/virtio/Makefile
@@ -0,0 +1,59 @@ 
+#   BSD LICENSE
+#
+#   Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+#   All rights reserved.
+#
+#   Redistribution and use in source and binary forms, with or without
+#   modification, are permitted provided that the following conditions
+#   are met:
+#
+#     * Redistributions of source code must retain the above copyright
+#       notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above copyright
+#       notice, this list of conditions and the following disclaimer in
+#       the documentation and/or other materials provided with the
+#       distribution.
+#     * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+#       its contributors may be used to endorse or promote products derived
+#       from this software without specific prior written permission.
+#
+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+#   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_virtio_crypto.a
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+EXPORT_MAP := rte_pmd_virtio_crypto_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += virtqueue.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += virtio_pci.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += virtio_rxtx.c
+SRCS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += virtio_cryptodev.c
+
+# this lib depends upon:
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += lib/librte_eal
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += lib/librte_mempool lib/librte_mbuf
+DEPDIRS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += lib/librte_cryptodev
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/crypto/virtio/virtio_crypto.h b/drivers/crypto/virtio/virtio_crypto.h
new file mode 100644
index 0000000..ba94e80
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_crypto.h
@@ -0,0 +1,452 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_CRYPTO_H_
+#define _VIRTIO_CRYPTO_H_
+
+#define VIRTIO_CRYPTO_SERVICE_CIPHER 0
+#define VIRTIO_CRYPTO_SERVICE_HASH   1
+#define VIRTIO_CRYPTO_SERVICE_MAC    2
+#define VIRTIO_CRYPTO_SERVICE_AEAD   3
+
+#define VIRTIO_CRYPTO_OPCODE(service, op)   (((service) << 8) | (op))
+
+struct virtio_crypto_ctrl_header {
+#define VIRTIO_CRYPTO_CIPHER_CREATE_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_CIPHER, 0x02)
+#define VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_CIPHER, 0x03)
+#define VIRTIO_CRYPTO_HASH_CREATE_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_HASH, 0x02)
+#define VIRTIO_CRYPTO_HASH_DESTROY_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_HASH, 0x03)
+#define VIRTIO_CRYPTO_MAC_CREATE_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_MAC, 0x02)
+#define VIRTIO_CRYPTO_MAC_DESTROY_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_MAC, 0x03)
+#define VIRTIO_CRYPTO_AEAD_CREATE_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x02)
+#define VIRTIO_CRYPTO_AEAD_DESTROY_SESSION \
+	   VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x03)
+	uint32_t opcode;
+	uint32_t algo;
+	uint32_t flag;
+	/* data virtqueue id */
+	uint32_t queue_id;
+};
+
+struct virtio_crypto_cipher_session_para {
+#define VIRTIO_CRYPTO_NO_CIPHER                 0
+#define VIRTIO_CRYPTO_CIPHER_ARC4               1
+#define VIRTIO_CRYPTO_CIPHER_AES_ECB            2
+#define VIRTIO_CRYPTO_CIPHER_AES_CBC            3
+#define VIRTIO_CRYPTO_CIPHER_AES_CTR            4
+#define VIRTIO_CRYPTO_CIPHER_DES_ECB            5
+#define VIRTIO_CRYPTO_CIPHER_DES_CBC            6
+#define VIRTIO_CRYPTO_CIPHER_3DES_ECB           7
+#define VIRTIO_CRYPTO_CIPHER_3DES_CBC           8
+#define VIRTIO_CRYPTO_CIPHER_3DES_CTR           9
+#define VIRTIO_CRYPTO_CIPHER_KASUMI_F8          10
+#define VIRTIO_CRYPTO_CIPHER_SNOW3G_UEA2        11
+#define VIRTIO_CRYPTO_CIPHER_AES_F8             12
+#define VIRTIO_CRYPTO_CIPHER_AES_XTS            13
+#define VIRTIO_CRYPTO_CIPHER_ZUC_EEA3           14
+	uint32_t algo;
+	/* length of key */
+	uint32_t keylen;
+
+#define VIRTIO_CRYPTO_OP_ENCRYPT  1
+#define VIRTIO_CRYPTO_OP_DECRYPT  2
+	/* encrypt or decrypt */
+	uint32_t op;
+	uint32_t padding;
+};
+
+struct virtio_crypto_session_input {
+	/* Device-writable part */
+	uint64_t session_id;
+	uint32_t status;
+	uint32_t padding;
+};
+
+struct virtio_crypto_cipher_session_req {
+	struct virtio_crypto_cipher_session_para para;
+	uint8_t padding[32];
+};
+
+struct virtio_crypto_hash_session_para {
+#define VIRTIO_CRYPTO_NO_HASH            0
+#define VIRTIO_CRYPTO_HASH_MD5           1
+#define VIRTIO_CRYPTO_HASH_SHA1          2
+#define VIRTIO_CRYPTO_HASH_SHA_224       3
+#define VIRTIO_CRYPTO_HASH_SHA_256       4
+#define VIRTIO_CRYPTO_HASH_SHA_384       5
+#define VIRTIO_CRYPTO_HASH_SHA_512       6
+#define VIRTIO_CRYPTO_HASH_SHA3_224      7
+#define VIRTIO_CRYPTO_HASH_SHA3_256      8
+#define VIRTIO_CRYPTO_HASH_SHA3_384      9
+#define VIRTIO_CRYPTO_HASH_SHA3_512      10
+#define VIRTIO_CRYPTO_HASH_SHA3_SHAKE128      11
+#define VIRTIO_CRYPTO_HASH_SHA3_SHAKE256      12
+	uint32_t algo;
+	/* hash result length */
+	uint32_t hash_result_len;
+	uint8_t padding[8];
+};
+
+struct virtio_crypto_hash_create_session_req {
+	struct virtio_crypto_hash_session_para para;
+	uint8_t padding[40];
+};
+
+struct virtio_crypto_mac_session_para {
+#define VIRTIO_CRYPTO_NO_MAC                       0
+#define VIRTIO_CRYPTO_MAC_HMAC_MD5                 1
+#define VIRTIO_CRYPTO_MAC_HMAC_SHA1                2
+#define VIRTIO_CRYPTO_MAC_HMAC_SHA_224             3
+#define VIRTIO_CRYPTO_MAC_HMAC_SHA_256             4
+#define VIRTIO_CRYPTO_MAC_HMAC_SHA_384             5
+#define VIRTIO_CRYPTO_MAC_HMAC_SHA_512             6
+#define VIRTIO_CRYPTO_MAC_CMAC_3DES                25
+#define VIRTIO_CRYPTO_MAC_CMAC_AES                 26
+#define VIRTIO_CRYPTO_MAC_KASUMI_F9                27
+#define VIRTIO_CRYPTO_MAC_SNOW3G_UIA2              28
+#define VIRTIO_CRYPTO_MAC_GMAC_AES                 41
+#define VIRTIO_CRYPTO_MAC_GMAC_TWOFISH             42
+#define VIRTIO_CRYPTO_MAC_CBCMAC_AES               49
+#define VIRTIO_CRYPTO_MAC_CBCMAC_KASUMI_F9         50
+#define VIRTIO_CRYPTO_MAC_XCBC_AES                 53
+	uint32_t algo;
+	/* hash result length */
+	uint32_t hash_result_len;
+	/* length of authenticated key */
+	uint32_t auth_key_len;
+	uint32_t padding;
+};
+
+struct virtio_crypto_mac_create_session_req {
+	struct virtio_crypto_mac_session_para para;
+	uint8_t padding[40];
+};
+
+struct virtio_crypto_aead_session_para {
+#define VIRTIO_CRYPTO_NO_AEAD     0
+#define VIRTIO_CRYPTO_AEAD_GCM    1
+#define VIRTIO_CRYPTO_AEAD_CCM    2
+#define VIRTIO_CRYPTO_AEAD_CHACHA20_POLY1305  3
+	uint32_t algo;
+	/* length of key */
+	uint32_t key_len;
+	/* hash result length */
+	uint32_t hash_result_len;
+	/* length of the additional authenticated data (AAD) in bytes */
+	uint32_t aad_len;
+	/* encrypt or decrypt, See above VIRTIO_CRYPTO_OP_* */
+	uint32_t op;
+	uint32_t padding;
+};
+
+struct virtio_crypto_aead_create_session_req {
+	struct virtio_crypto_aead_session_para para;
+	uint8_t padding[32];
+};
+
+struct virtio_crypto_alg_chain_session_para {
+#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER  1
+#define VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH  2
+	uint32_t alg_chain_order;
+/* Plain hash */
+#define VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN    1
+/* Authenticated hash (mac) */
+#define VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH     2
+/* Nested hash */
+#define VIRTIO_CRYPTO_SYM_HASH_MODE_NESTED   3
+	uint32_t hash_mode;
+	struct virtio_crypto_cipher_session_para cipher_param;
+	union {
+		struct virtio_crypto_hash_session_para hash_param;
+		struct virtio_crypto_mac_session_para mac_param;
+		uint8_t padding[16];
+	} u;
+	/* length of the additional authenticated data (AAD) in bytes */
+	uint32_t aad_len;
+	uint32_t padding;
+};
+
+struct virtio_crypto_alg_chain_session_req {
+	struct virtio_crypto_alg_chain_session_para para;
+};
+
+struct virtio_crypto_sym_create_session_req {
+	union {
+		struct virtio_crypto_cipher_session_req cipher;
+		struct virtio_crypto_alg_chain_session_req chain;
+		uint8_t padding[48];
+	} u;
+
+	/* Device-readable part */
+
+/* No operation */
+#define VIRTIO_CRYPTO_SYM_OP_NONE  0
+/* Cipher only operation on the data */
+#define VIRTIO_CRYPTO_SYM_OP_CIPHER  1
+/*
+ * Chain any cipher with any hash or mac operation. The order
+ * depends on the value of alg_chain_order param
+ */
+#define VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING  2
+	uint32_t op_type;
+	uint32_t padding;
+};
+
+struct virtio_crypto_destroy_session_req {
+	/* Device-readable part */
+	uint64_t  session_id;
+	uint8_t padding[48];
+};
+
+/* The request of the control virtqueue's packet */
+struct virtio_crypto_op_ctrl_req {
+	struct virtio_crypto_ctrl_header header;
+
+	union {
+		struct virtio_crypto_sym_create_session_req
+			sym_create_session;
+		struct virtio_crypto_hash_create_session_req
+			hash_create_session;
+		struct virtio_crypto_mac_create_session_req
+			mac_create_session;
+		struct virtio_crypto_aead_create_session_req
+			aead_create_session;
+		struct virtio_crypto_destroy_session_req
+			destroy_session;
+		uint8_t padding[56];
+	} u;
+};
+
+struct virtio_crypto_op_header {
+#define VIRTIO_CRYPTO_CIPHER_ENCRYPT \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_CIPHER, 0x00)
+#define VIRTIO_CRYPTO_CIPHER_DECRYPT \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_CIPHER, 0x01)
+#define VIRTIO_CRYPTO_HASH \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_HASH, 0x00)
+#define VIRTIO_CRYPTO_MAC \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_MAC, 0x00)
+#define VIRTIO_CRYPTO_AEAD_ENCRYPT \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x00)
+#define VIRTIO_CRYPTO_AEAD_DECRYPT \
+	VIRTIO_CRYPTO_OPCODE(VIRTIO_CRYPTO_SERVICE_AEAD, 0x01)
+	uint32_t opcode;
+	/* algo should be service-specific algorithms */
+	uint32_t algo;
+	/* session_id should be service-specific algorithms */
+	uint64_t session_id;
+	/* control flag to control the request */
+	uint32_t flag;
+	uint32_t padding;
+};
+
+struct virtio_crypto_cipher_para {
+	/*
+	 * Byte Length of valid IV/Counter
+	 *
+	 * For block ciphers in CBC or F8 mode, or for Kasumi in F8 mode, or for
+	 *   SNOW3G in UEA2 mode, this is the length of the IV (which
+	 *   must be the same as the block length of the cipher).
+	 * For block ciphers in CTR mode, this is the length of the counter
+	 *   (which must be the same as the block length of the cipher).
+	 * For AES-XTS, this is the 128bit tweak, i, from IEEE Std 1619-2007.
+	 *
+	 * The IV/Counter will be updated after every partial cryptographic
+	 * operation.
+	 */
+	uint32_t iv_len;
+	/* length of source data */
+	uint32_t src_data_len;
+	/* length of dst data */
+	uint32_t dst_data_len;
+	uint32_t padding;
+};
+
+struct virtio_crypto_hash_para {
+	/* length of source data */
+	uint32_t src_data_len;
+	/* hash result length */
+	uint32_t hash_result_len;
+};
+
+struct virtio_crypto_mac_para {
+	struct virtio_crypto_hash_para hash;
+};
+
+struct virtio_crypto_aead_para {
+	/*
+	 * Byte Length of valid IV data pointed to by the below iv_addr
+	 * parameter.
+	 *
+	 * For GCM mode, this is either 12 (for 96-bit IVs) or 16, in which
+	 *   case iv_addr points to J0.
+	 * For CCM mode, this is the length of the nonce, which can be in the
+	 *   range 7 to 13 inclusive.
+	 */
+	uint32_t iv_len;
+	/* length of additional auth data */
+	uint32_t aad_len;
+	/* length of source data */
+	uint32_t src_data_len;
+	/* length of dst data */
+	uint32_t dst_data_len;
+};
+
+struct virtio_crypto_cipher_data_req {
+	/* Device-readable part */
+	struct virtio_crypto_cipher_para para;
+	uint8_t padding[24];
+};
+
+struct virtio_crypto_hash_data_req {
+	/* Device-readable part */
+	struct virtio_crypto_hash_para para;
+	uint8_t padding[40];
+};
+
+struct virtio_crypto_mac_data_req {
+	/* Device-readable part */
+	struct virtio_crypto_mac_para para;
+	uint8_t padding[40];
+};
+
+struct virtio_crypto_alg_chain_data_para {
+	uint32_t iv_len;
+	/* Length of source data */
+	uint32_t src_data_len;
+	/* Length of destination data */
+	uint32_t dst_data_len;
+	/* Starting point for cipher processing in source data */
+	uint32_t cipher_start_src_offset;
+	/* Length of the source data that the cipher will be computed on */
+	uint32_t len_to_cipher;
+	/* Starting point for hash processing in source data */
+	uint32_t hash_start_src_offset;
+	/* Length of the source data that the hash will be computed on */
+	uint32_t len_to_hash;
+	/* Length of the additional auth data */
+	uint32_t aad_len;
+	/* Length of the hash result */
+	uint32_t hash_result_len;
+	uint32_t reserved;
+};
+
+struct virtio_crypto_alg_chain_data_req {
+	/* Device-readable part */
+	struct virtio_crypto_alg_chain_data_para para;
+};
+
+struct virtio_crypto_sym_data_req {
+	union {
+		struct virtio_crypto_cipher_data_req cipher;
+		struct virtio_crypto_alg_chain_data_req chain;
+		uint8_t padding[40];
+	} u;
+
+	/* See above VIRTIO_CRYPTO_SYM_OP_* */
+	uint32_t op_type;
+	uint32_t padding;
+};
+
+struct virtio_crypto_aead_data_req {
+	/* Device-readable part */
+	struct virtio_crypto_aead_para para;
+	uint8_t padding[32];
+};
+
+/* The request of the data virtqueue's packet */
+struct virtio_crypto_op_data_req {
+	struct virtio_crypto_op_header header;
+
+	union {
+		struct virtio_crypto_sym_data_req  sym_req;
+		struct virtio_crypto_hash_data_req hash_req;
+		struct virtio_crypto_mac_data_req mac_req;
+		struct virtio_crypto_aead_data_req aead_req;
+		uint8_t padding[48];
+	} u;
+};
+
+#define VIRTIO_CRYPTO_OK        0
+#define VIRTIO_CRYPTO_ERR       1
+#define VIRTIO_CRYPTO_BADMSG    2
+#define VIRTIO_CRYPTO_NOTSUPP   3
+#define VIRTIO_CRYPTO_INVSESS   4 /* Invalid session id */
+
+/* The accelerator hardware is ready */
+#define VIRTIO_CRYPTO_S_HW_READY  (1 << 0)
+
+struct virtio_crypto_config {
+	/* See VIRTIO_CRYPTO_OP_* above */
+	uint32_t  status;
+
+	/*
+	 * Maximum number of data queue
+	 */
+	uint32_t  max_dataqueues;
+
+	/*
+	 * Specifies the services mask which the device support,
+	 * see VIRTIO_CRYPTO_SERVICE_* above
+	 */
+	uint32_t crypto_services;
+
+	/* Detailed algorithms mask */
+	uint32_t cipher_algo_l;
+	uint32_t cipher_algo_h;
+	uint32_t hash_algo;
+	uint32_t mac_algo_l;
+	uint32_t mac_algo_h;
+	uint32_t aead_algo;
+	/* Maximum length of cipher key */
+	uint32_t max_cipher_key_len;
+	/* Maximum length of authenticated key */
+	uint32_t max_auth_key_len;
+	uint32_t reserve;
+	/* Maximum size of each crypto request's content */
+	uint64_t max_size;
+};
+
+struct virtio_crypto_inhdr {
+	/* See VIRTIO_CRYPTO_* above */
+	uint8_t status;
+};
+
+#endif /* _VIRTIO_CRYPTO_H_ */
diff --git a/drivers/crypto/virtio/virtio_crypto_algs.h b/drivers/crypto/virtio/virtio_crypto_algs.h
new file mode 100644
index 0000000..76dc6f6
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_crypto_algs.h
@@ -0,0 +1,56 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_CRYPTO_ALGS_H_
+#define _VIRTIO_CRYPTO_ALGS_H_
+
+#include <rte_memory.h>
+#include "virtio_crypto.h"
+
+struct virtio_crypto_session {
+	uint64_t session_id;
+
+	struct {
+		uint16_t offset;
+		uint16_t length;
+	} iv;
+
+	struct {
+		uint32_t length;
+		phys_addr_t phys_addr;
+	} aad;
+
+	struct virtio_crypto_op_ctrl_req ctrl;
+};
+
+#endif /* _VIRTIO_CRYPTO_ALGS_H_ */
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
new file mode 100644
index 0000000..9e6cd20
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -0,0 +1,1542 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdbool.h>
+#include <errno.h>
+#include <unistd.h>
+#ifdef RTE_EXEC_ENV_LINUXAPP
+#include <dirent.h>
+#include <fcntl.h>
+#endif
+
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_memcpy.h>
+#include <rte_string_fns.h>
+#include <rte_memzone.h>
+#include <rte_malloc.h>
+#include <rte_atomic.h>
+#include <rte_branch_prediction.h>
+#include <rte_pci.h>
+#include <rte_common.h>
+#include <rte_errno.h>
+
+#include <rte_memory.h>
+#include <rte_eal.h>
+#include <rte_dev.h>
+#include <rte_log.h>
+
+#include "virtio_crypto.h"
+#include "virtio_cryptodev.h"
+#include "virtio_logs.h"
+#include "virtqueue.h"
+#include "virtio_crypto_algs.h"
+
+static int virtio_crypto_dev_configure(struct rte_cryptodev *dev,
+		struct rte_cryptodev_config *config);
+static int virtio_crypto_dev_start(struct rte_cryptodev *dev);
+static void virtio_crypto_dev_stop(struct rte_cryptodev *dev);
+static int virtio_crypto_dev_close(struct rte_cryptodev *dev);
+static void virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *dev_info);
+static void virtio_crypto_dev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats);
+static void virtio_crypto_dev_stats_reset(struct rte_cryptodev *dev);
+static int virtio_crypto_qp_setup(struct rte_cryptodev *dev,
+		uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id,
+		struct rte_mempool *session_pool);
+static int virtio_crypto_qp_release(struct rte_cryptodev *dev,
+		uint16_t queue_pair_id);
+static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev);
+static unsigned virtio_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev);
+static void virtio_crypto_sym_clear_session(struct rte_cryptodev *dev,
+		struct rte_cryptodev_sym_session *sess);
+static int virtio_crypto_sym_configure_session(struct rte_cryptodev *dev,
+		struct rte_crypto_sym_xform *xform,
+		struct rte_cryptodev_sym_session *session,
+		struct rte_mempool *mp);
+
+/*
+ * The set of PCI devices this driver supports
+ */
+static const struct rte_pci_id pci_id_virtio_crypto_map[] = {
+	{ RTE_PCI_DEVICE(VIRTIO_CRYPTO_PCI_VENDORID,
+						VIRTIO_CRYPTO_PCI_LEGACY_DEVICEID) },
+	{ RTE_PCI_DEVICE(VIRTIO_CRYPTO_PCI_VENDORID,
+						VIRTIO_CRYPTO_PCI_MODERN_DEVICEID) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+uint8_t cryptodev_virtio_driver_id;
+
+#define NUM_ENTRY_VIRTIO_CRYPTO_SYM_CREATE_SESSION 4
+
+static int
+virtio_crypto_send_command(struct virtqueue *vq,
+		struct virtio_crypto_op_ctrl_req *ctrl, uint8_t *cipher_key,
+		uint8_t *auth_key, struct virtio_crypto_session *session)
+{
+	uint8_t idx;
+	uint8_t needed = 1;
+	uint32_t head;
+	uint32_t cipher_keylen = 0;
+	uint32_t auth_keylen = 0;
+	uint32_t ctrl_req_length = sizeof(struct virtio_crypto_op_ctrl_req);
+	uint32_t input_length = sizeof(struct virtio_crypto_session_input);
+	uint32_t total_size = 0;
+	uint32_t input_offset = 0;
+	void *virt_addr_started = NULL;
+	phys_addr_t phys_addr_started;
+	struct vring_desc *desc;
+	uint32_t desc_offset;
+	struct virtio_crypto_session_input *input;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (vq == NULL) {
+		PMD_SESSION_LOG(ERR, "vq is NULL");
+		return -EINVAL;
+	}
+	if (ctrl == NULL) {
+		PMD_SESSION_LOG(ERR, "ctrl is NULL.");
+		return -EINVAL;
+	}
+	if (session == NULL) {
+		PMD_SESSION_LOG(ERR, "session is NULL.");
+		return -EINVAL;
+	}
+
+	/* cipher only is supported, it is available if auth_key is NULL */
+	if (!cipher_key) {
+		PMD_SESSION_LOG(ERR, "cipher key is NULL.");
+		return -EINVAL;
+	}
+	head = vq->vq_desc_head_idx;
+	PMD_INIT_LOG(DEBUG, "vq->vq_desc_head_idx = %d, vq = %p", head, vq);
+
+	if (vq->vq_free_cnt < needed) {
+		PMD_SESSION_LOG(ERR, "Not enough entry");
+		return -ENOSPC;
+	}
+
+	/* calculate the length of cipher key */
+	if (cipher_key) {
+		switch (ctrl->u.sym_create_session.op_type) {
+		case VIRTIO_CRYPTO_SYM_OP_CIPHER:
+			cipher_keylen
+				= ctrl->u.sym_create_session.u.cipher
+							.para.keylen;
+			break;
+		case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
+			cipher_keylen
+				= ctrl->u.sym_create_session.u.chain
+					.para.cipher_param.keylen;
+			break;
+		default:
+			PMD_SESSION_LOG(ERR, "invalid op type");
+			return -EINVAL;
+		}
+	}
+
+	/* calculate the length of auth key */
+	if (auth_key) {
+		auth_keylen =
+			ctrl->u.sym_create_session.u.chain.para.u.mac_param
+				.auth_key_len;
+	}
+
+	/*
+	 * malloc memory to store indirect vring_desc entries, including
+	 * ctrl request, cipher key, auth key, session input and desc vring
+	 */
+	desc_offset = ctrl_req_length + cipher_keylen + auth_keylen
+		+ input_length;
+	virt_addr_started = rte_malloc(NULL,
+		desc_offset + NUM_ENTRY_VIRTIO_CRYPTO_SYM_CREATE_SESSION
+			* sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+	if (virt_addr_started == NULL) {
+		PMD_SESSION_LOG(ERR, "not enough heap memory");
+		return -ENOSPC;
+	}
+	phys_addr_started = rte_malloc_virt2phy(virt_addr_started);
+
+	/* address to store indirect vring desc entries */
+	desc = (struct vring_desc *)
+		((uint8_t *)virt_addr_started + desc_offset);
+
+	idx = 0;
+	/*  ctrl req part */
+	memcpy(virt_addr_started, ctrl, ctrl_req_length);
+	desc[idx].addr = phys_addr_started;
+	desc[idx].len = ctrl_req_length;
+	desc[idx].flags = VRING_DESC_F_NEXT;
+	desc[idx].next = idx + 1;
+	idx++;
+	total_size += ctrl_req_length;
+	input_offset += ctrl_req_length;
+
+	/* cipher key part */
+	if (cipher_keylen > 0) {
+		memcpy((uint8_t *)virt_addr_started + total_size,
+			cipher_key, cipher_keylen);
+
+		desc[idx].addr = phys_addr_started + total_size;
+		desc[idx].len = cipher_keylen;
+		desc[idx].flags = VRING_DESC_F_NEXT;
+		desc[idx].next = idx + 1;
+		idx++;
+
+		total_size += cipher_keylen;
+		input_offset += cipher_keylen;
+	}
+	/* auth key part */
+	if (auth_keylen > 0) {
+		memcpy((uint8_t *)virt_addr_started + total_size,
+			auth_key, auth_keylen);
+
+		desc[idx].addr = phys_addr_started + total_size;
+		desc[idx].len = auth_keylen;
+		desc[idx].flags = VRING_DESC_F_NEXT;
+		desc[idx].next = idx + 1;
+		idx++;
+		total_size += auth_keylen;
+		input_offset += auth_keylen;
+	}
+
+	/* input part */
+	input = (struct virtio_crypto_session_input *)
+		((uint8_t *)virt_addr_started + input_offset);
+	input->status = VIRTIO_CRYPTO_ERR;
+	input->session_id = ~0ULL;
+	desc[idx].addr = phys_addr_started + total_size;
+	desc[idx].len = input_length;
+	desc[idx].flags = VRING_DESC_F_WRITE;
+	idx++;
+
+	/* use a single buffer */
+	vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
+	vq->vq_ring.desc[head].addr = phys_addr_started + desc_offset;
+	vq->vq_ring.desc[head].len = idx * sizeof(struct vring_desc);
+	vq->vq_free_cnt--;
+
+	vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
+
+	vq_crypto_update_avail_ring(vq, head);
+	vq_crypto_update_avail_idx(vq);
+
+	PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+	virtqueue_crypto_notify(vq);
+
+	rte_rmb();
+	while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
+		rte_rmb();
+		usleep(100);
+	}
+
+	while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
+		uint32_t idx, desc_idx, used_idx;
+		struct vring_used_elem *uep;
+
+		used_idx = (uint32_t)(vq->vq_used_cons_idx
+				& (vq->vq_nentries - 1));
+		uep = &vq->vq_ring.used->ring[used_idx];
+		idx = (uint32_t) uep->id;
+		desc_idx = idx;
+
+		while (vq->vq_ring.desc[desc_idx].flags & VRING_DESC_F_NEXT) {
+			desc_idx = vq->vq_ring.desc[desc_idx].next;
+			vq->vq_free_cnt++;
+		}
+
+		vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+		vq->vq_desc_head_idx = idx;
+
+		vq->vq_used_cons_idx++;
+		vq->vq_free_cnt++;
+	}
+
+	PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\nvq->vq_desc_head_idx=%d",
+			vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+	/* get the result */
+	if (input->status != VIRTIO_CRYPTO_OK) {
+		PMD_SESSION_LOG(ERR, "Something wrong on backend! "
+				"status=%"PRIu32", session_id=%"PRIu64"",
+				input->status, input->session_id);
+		rte_free(virt_addr_started);
+		return -1;
+	} else {
+		session->session_id = input->session_id;
+
+		PMD_SESSION_LOG(INFO, "Create session successfully, "
+				"session_id=%"PRIu64"", input->session_id);
+		rte_free(virt_addr_started);
+		return 0;
+	}
+}
+
+void virtio_crypto_queue_release(struct virtqueue *vq)
+{
+	struct virtio_crypto_hw *hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (vq) {
+		hw = vq->hw;
+		/* Select and deactivate the queue */
+		VTPCI_OPS(hw)->del_queue(hw, vq);
+
+		rte_memzone_free(vq->mz);
+		rte_free(vq);
+	}
+}
+
+int virtio_crypto_queue_setup(struct rte_cryptodev *dev,
+		int queue_type,
+		uint16_t vtpci_queue_idx,
+		uint16_t nb_desc,
+		int socket_id,
+		struct virtqueue **pvq)
+{
+	char vq_name[VIRTQUEUE_MAX_NAME_SZ];
+	const struct rte_memzone *mz;
+	unsigned int vq_size, size;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+	struct virtqueue *vq = NULL;
+
+	PMD_INIT_FUNC_TRACE();
+
+	PMD_INIT_LOG(DEBUG, "setting up queue: %u", vtpci_queue_idx);
+
+	/*
+	 * Read the virtqueue size from the Queue Size field
+	 * Always power of 2 and if 0 virtqueue does not exist
+	 */
+	vq_size = VTPCI_OPS(hw)->get_queue_num(hw, vtpci_queue_idx);
+	if (vq_size == 0) {
+		PMD_INIT_LOG(ERR, "virtqueue does not exist");
+		return -EINVAL;
+	}
+	PMD_INIT_LOG(DEBUG, "vq_size: %u", vq_size);
+
+	if (!rte_is_power_of_2(vq_size)) {
+		PMD_INIT_LOG(ERR, "virtqueue size is not power of 2");
+		return -EINVAL;
+	}
+
+	if (queue_type == VTCRYPTO_DATAQ)
+		snprintf(vq_name, sizeof(vq_name), "dev%d_dataqueue%d",
+				dev->data->dev_id, vtpci_queue_idx);
+	else if (queue_type == VTCRYPTO_CTRLQ)
+		snprintf(vq_name, sizeof(vq_name), "dev%d_controlqueue",
+				dev->data->dev_id);
+	size = RTE_ALIGN_CEIL(sizeof(*vq) +
+				vq_size * sizeof(struct vq_desc_extra),
+				RTE_CACHE_LINE_SIZE);
+	vq = rte_zmalloc_socket(vq_name, size, RTE_CACHE_LINE_SIZE,
+				socket_id);
+	if (vq == NULL) {
+		PMD_INIT_LOG(ERR, "Can not allocate virtqueue");
+		return -ENOMEM;
+	}
+
+	vq->hw = hw;
+	vq->dev_id = dev->data->dev_id;
+	vq->vq_queue_index = vtpci_queue_idx;
+	vq->vq_nentries = vq_size;
+
+	/*
+	 * Using part of the vring entries is permitted, but the maximum
+	 * is vq_size
+	 */
+	if (nb_desc == 0 || nb_desc > vq_size)
+		nb_desc = vq_size;
+	vq->vq_free_cnt = nb_desc;
+
+	/*
+	 * Reserve a memzone for vring elements
+	 */
+	size = vring_crypto_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+	vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
+	PMD_INIT_LOG(DEBUG, "%s vring_size: %d, rounded_vring_size: %d",
+			(queue_type == VTCRYPTO_DATAQ) ? "dataq" : "ctrlq",
+			size, vq->vq_ring_size);
+
+	mz = rte_memzone_reserve_aligned(vq_name, vq->vq_ring_size,
+			socket_id, 0, VIRTIO_PCI_VRING_ALIGN);
+	if (mz == NULL) {
+		if (rte_errno == EEXIST)
+			mz = rte_memzone_lookup(vq_name);
+		if (mz == NULL) {
+			PMD_INIT_LOG(ERR, "not enough memory");
+			rte_free(vq);
+			return -ENOMEM;
+		}
+	}
+
+	/*
+	 * Virtio PCI device VIRTIO_PCI_QUEUE_PF register is 32bit,
+	 * and only accepts 32 bit page frame number.
+	 * Check if the allocated physical memory exceeds 16TB.
+	 */
+	if ((mz->phys_addr + vq->vq_ring_size - 1)
+				>> (VIRTIO_PCI_QUEUE_ADDR_SHIFT + 32)) {
+		PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+		rte_free(vq);
+		rte_memzone_free(mz);
+		return -ENOMEM;
+	}
+
+	memset(mz->addr, 0, mz->len);
+	vq->mz = mz;
+	vq->vq_ring_mem = mz->phys_addr;
+	vq->vq_ring_virt_mem = mz->addr;
+	PMD_INIT_LOG(DEBUG, "vq->vq_ring_mem(physical): 0x%"PRIx64,
+					(uint64_t)mz->phys_addr);
+	PMD_INIT_LOG(DEBUG, "vq->vq_ring_virt_mem: 0x%"PRIx64,
+					(uint64_t)(uintptr_t)mz->addr);
+
+	*pvq = vq;
+
+	return 0;
+}
+
+static int
+virtio_crypto_cq_setup(struct rte_cryptodev *dev, uint16_t queue_idx)
+{
+	struct virtqueue *vq;
+	int ret;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	/* if virtio dev is started, do not touch the virtqueues */
+	if (dev->data->dev_started)
+		return 0;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = virtio_crypto_queue_setup(dev, VTCRYPTO_CTRLQ, queue_idx,
+			0, SOCKET_ID_ANY, &vq);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR, "control vq initialization failed");
+		return ret;
+	}
+
+	hw->cvq = vq;
+
+	return 0;
+}
+
+static void
+virtio_crypto_free_queues(struct rte_cryptodev *dev)
+{
+	unsigned int i;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < hw->max_dataqueues; i++)
+		virtio_crypto_queue_release(dev->data->queue_pairs[i]);
+}
+
+static int
+virtio_crypto_dev_close(struct rte_cryptodev *dev __rte_unused)
+{
+	return 0;
+}
+
+/*
+ * dev_ops for virtio, bare necessities for basic operation
+ */
+static struct rte_cryptodev_ops virtio_crypto_dev_ops = {
+	/* Device related operations */
+	.dev_configure			= virtio_crypto_dev_configure,
+	.dev_start			= virtio_crypto_dev_start,
+	.dev_stop			= virtio_crypto_dev_stop,
+	.dev_close			= virtio_crypto_dev_close,
+	.dev_infos_get			= virtio_crypto_dev_info_get,
+
+	.stats_get			= virtio_crypto_dev_stats_get,
+	.stats_reset			= virtio_crypto_dev_stats_reset,
+
+	.queue_pair_setup		= virtio_crypto_qp_setup,
+	.queue_pair_release		= virtio_crypto_qp_release,
+	.queue_pair_start		= NULL,
+	.queue_pair_stop		= NULL,
+	.queue_pair_count		= NULL,
+
+	/* Crypto related operations */
+	.session_get_size	= virtio_crypto_sym_get_session_private_size,
+	.session_configure	= virtio_crypto_sym_configure_session,
+	.session_clear		= virtio_crypto_sym_clear_session,
+	.qp_attach_session	= NULL,
+	.qp_detach_session	= NULL
+};
+
+static void
+virtio_crypto_update_stats(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	unsigned int i;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (stats == NULL) {
+		PMD_DRV_LOG(ERR, "invalid pointer");
+		return;
+	}
+
+	for (i = 0; i < hw->max_dataqueues; i++) {
+		const struct virtqueue *data_queue
+			= dev->data->queue_pairs[i];
+		if (data_queue == NULL)
+			continue;
+
+		stats->enqueued_count += data_queue->packets_sent_total;
+		stats->enqueue_err_count += data_queue->packets_sent_failed;
+
+		stats->dequeued_count += data_queue->packets_received_total;
+		stats->dequeue_err_count
+			+= data_queue->packets_received_failed;
+	}
+}
+
+static void
+virtio_crypto_dev_stats_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_stats *stats)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	virtio_crypto_update_stats(dev, stats);
+}
+
+static void
+virtio_crypto_dev_stats_reset(struct rte_cryptodev *dev)
+{
+	unsigned int i;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	for (i = 0; i < hw->max_dataqueues; i++) {
+		struct virtqueue *data_queue = dev->data->queue_pairs[i];
+		if (data_queue == NULL)
+			continue;
+
+		data_queue->packets_sent_total = 0;
+		data_queue->packets_sent_failed = 0;
+
+		data_queue->packets_received_total = 0;
+		data_queue->packets_received_failed = 0;
+	}
+}
+
+static int
+virtio_crypto_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
+		const struct rte_cryptodev_qp_conf *qp_conf,
+		int socket_id,
+		struct rte_mempool *session_pool __rte_unused)
+{
+	int ret;
+	struct virtqueue *vq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* if virtio dev is started, do not touch the virtqueues */
+	if (dev->data->dev_started)
+		return 0;
+
+	ret = virtio_crypto_queue_setup(dev, VTCRYPTO_DATAQ, queue_pair_id,
+			qp_conf->nb_descriptors, socket_id, &vq);
+	if (ret < 0) {
+		PMD_INIT_LOG(ERR,
+			"virtio crypto data queue initialization failed");
+		return ret;
+	}
+
+	dev->data->queue_pairs[queue_pair_id] = vq;
+
+	return 0;
+}
+
+static int
+virtio_crypto_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id)
+{
+	struct virtqueue *vq
+		= (struct virtqueue *)dev->data->queue_pairs[queue_pair_id];
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (vq == NULL) {
+		PMD_DRV_LOG(DEBUG, "vq already freed");
+		return 0;
+	}
+
+	virtio_crypto_queue_release(vq);
+	return 0;
+}
+
+static inline int
+vtpci_with_feature(struct virtio_crypto_hw *hw, uint64_t bit)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return (hw->guest_features & (1ULL << bit)) != 0;
+}
+
+static int
+virtio_negotiate_features(struct virtio_crypto_hw *hw, uint64_t req_features)
+{
+	uint64_t host_features;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Prepare guest_features: features that the driver wants to support */
+	PMD_INIT_LOG(DEBUG, "guest_features before negotiate = %" PRIx64,
+		req_features);
+
+	/* Read device(host) feature bits */
+	host_features = VTPCI_OPS(hw)->get_features(hw);
+	PMD_INIT_LOG(DEBUG, "host_features before negotiate = %" PRIx64,
+		host_features);
+
+	/*
+	 * Negotiate features: the subset of device feature bits that the
+	 * driver also supports is written back as the guest feature bits.
+	 */
+	hw->guest_features = req_features;
+	hw->guest_features = vtpci_negotiate_features_crypto(hw, host_features);
+	PMD_INIT_LOG(DEBUG, "features after negotiate = %" PRIx64,
+		hw->guest_features);
+
+	if (hw->modern) {
+		if (!vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) {
+			PMD_INIT_LOG(ERR,
+				"VIRTIO_F_VERSION_1 feature is not enabled.");
+			return -1;
+		}
+		vtpci_set_status_crypto(hw, VIRTIO_CONFIG_STATUS_FEATURES_OK);
+		if (!(vtpci_get_status_crypto(hw) &
+				VIRTIO_CONFIG_STATUS_FEATURES_OK)) {
+			PMD_INIT_LOG(ERR,
+				"failed to set FEATURES_OK status!");
+			return -1;
+		}
+	}
+
+	hw->req_guest_features = req_features;
+
+	return 0;
+}
+
+/* reset device and renegotiate features if needed */
+static int
+virtio_crypto_init_device(struct rte_cryptodev *cryptodev, uint64_t req_features)
+{
+	struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
+	struct virtio_crypto_config *config;
+	struct virtio_crypto_config local_config;
+
+	PMD_INIT_FUNC_TRACE();
+
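+	/*
+	 * Device initialization follows the sequence from the virtio spec:
+	 * RESET -> ACK -> DRIVER -> feature negotiation (FEATURES_OK for
+	 * modern devices) -> read config; DRIVER_OK is set later, at start.
+	 */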
+	/* Reset the device, although not strictly necessary at startup */
+	vtpci_reset_crypto(hw);
+
+	/* Tell the host we've noticed this device. */
+	vtpci_set_status_crypto(hw, VIRTIO_CONFIG_STATUS_ACK);
+
+	/* Tell the host we know how to drive the device. */
+	vtpci_set_status_crypto(hw, VIRTIO_CONFIG_STATUS_DRIVER);
+	if (virtio_negotiate_features(hw, req_features) < 0)
+		return -1;
+
+	config = &local_config;
+	/* Get status of the device */
+	vtpci_read_dev_config_crypto(hw,
+		offsetof(struct virtio_crypto_config, status),
+		&config->status, sizeof(config->status));
+	if (config->status != VIRTIO_CRYPTO_S_HW_READY) {
+		PMD_DRV_LOG(ERR, "accelerator hardware is not ready");
+		return -1;
+	}
+
+	/* Get number of data queues */
+	vtpci_read_dev_config_crypto(hw,
+		offsetof(struct virtio_crypto_config, max_dataqueues),
+		&config->max_dataqueues,
+		sizeof(config->max_dataqueues));
+	hw->max_dataqueues =
+		(config->max_dataqueues > VIRTIO_MAX_DATA_QUEUES) ?
+		VIRTIO_MAX_DATA_QUEUES : config->max_dataqueues;
+
+	PMD_INIT_LOG(DEBUG, "config->max_dataqueues=%d",
+			config->max_dataqueues);
+	PMD_INIT_LOG(DEBUG, "config->status=%d", config->status);
+
+	PMD_INIT_LOG(DEBUG, "hw->max_dataqueues=%d",
+		hw->max_dataqueues);
+
+	/* setup and start control queue only */
+	if (virtio_crypto_cq_setup(cryptodev, config->max_dataqueues) < 0) {
+		PMD_INIT_LOG(ERR, "control queue setup error");
+		return -1;
+	}
+	virtio_crypto_cq_start(cryptodev);
+
+	return 0;
+}
+
+/*
+ * This function is called from the PCI probe callback.
+ * It returns 0 on success.
+ */
+static int
+crypto_virtio_create(const char *name, struct rte_pci_device *pci_dev,
+		struct rte_cryptodev_pmd_init_params *init_params)
+{
+	struct rte_cryptodev *cryptodev;
+	struct virtio_crypto_hw *hw;
+
+	PMD_INIT_FUNC_TRACE();
+
+	cryptodev = rte_cryptodev_pmd_create(name, &pci_dev->device,
+					init_params);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	cryptodev->driver_id = cryptodev_virtio_driver_id;
+	cryptodev->dev_ops = &virtio_crypto_dev_ops;
+
+	cryptodev->enqueue_burst = virtio_crypto_pkt_tx_burst;
+	cryptodev->dequeue_burst = virtio_crypto_pkt_rx_burst;
+
+	cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+		RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING;
+
+	hw = cryptodev->data->dev_private;
+	hw->dev_id = cryptodev->data->dev_id;
+
+	PMD_INIT_LOG(DEBUG, "dev %d vendorID=0x%x deviceID=0x%x",
+		cryptodev->data->dev_id, pci_dev->id.vendor_id,
+		pci_dev->id.device_id);
+
+	/* init legacy or modern device */
+	if (vtpci_init_crypto(pci_dev, hw))
+		return -1;
+
+	/* reset device and negotiate default features */
+	if (virtio_crypto_init_device(cryptodev,
+			VIRTIO_CRYPTO_PMD_GUEST_FEATURES))
+		return -1;
+
+	return 0;
+}
+
+static int
+virtio_crypto_dev_uninit(struct rte_cryptodev *cryptodev)
+{
+	struct virtio_crypto_hw *hw = cryptodev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (rte_eal_process_type() == RTE_PROC_SECONDARY)
+		return -EPERM;
+
+	if (cryptodev->data->dev_started) {
+		virtio_crypto_dev_stop(cryptodev);
+		virtio_crypto_dev_close(cryptodev);
+	}
+
+	cryptodev->dev_ops = NULL;
+	cryptodev->enqueue_burst = NULL;
+	cryptodev->dequeue_burst = NULL;
+
+	/* release control queue */
+	virtio_crypto_queue_release(hw->cvq);
+
+	rte_free(cryptodev->data);
+	cryptodev->data = NULL;
+
+	PMD_DRV_LOG(INFO, "dev_uninit completed");
+
+	return 0;
+}
+
+static int
+virtio_crypto_dev_configure(struct rte_cryptodev *dev __rte_unused,
+	struct rte_cryptodev_config *config __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return 0;
+}
+
+static void
+virtio_crypto_dev_stop(struct rte_cryptodev *dev)
+{
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+	PMD_DRV_LOG(DEBUG, "virtio_dev_stop");
+
+	vtpci_reset_crypto(hw);
+
+	virtio_crypto_dev_free_mbufs(dev);
+	virtio_crypto_free_queues(dev);
+
+	dev->data->dev_started = 0;
+}
+
+static int
+virtio_crypto_dev_start(struct rte_cryptodev *dev)
+{
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	if (dev->data->dev_started)
+		return 0;
+
+	/* Do final configuration before queue engine starts */
+	virtio_crypto_dq_start(dev);
+	vtpci_reinit_complete_crypto(hw);
+
+	dev->data->dev_started = 1;
+
+	return 0;
+}
+
+static void virtio_crypto_dev_free_mbufs(struct rte_cryptodev *dev)
+{
+	uint32_t i;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	for (i = 0; i < hw->max_dataqueues; i++) {
+		PMD_INIT_LOG(DEBUG, "Before freeing dataq[%d] used "
+			"and unused buf", i);
+		VIRTQUEUE_DUMP((struct virtqueue *)
+			dev->data->queue_pairs[i]);
+
+		PMD_INIT_LOG(DEBUG, "queue_pairs[%d]=%p",
+				i, dev->data->queue_pairs[i]);
+
+		virtqueue_crypto_detatch_unused(dev->data->queue_pairs[i]);
+
+		PMD_INIT_LOG(DEBUG, "After freeing dataq[%d] used and "
+					"unused buf", i);
+		VIRTQUEUE_DUMP(
+			(struct virtqueue *)dev->data->queue_pairs[i]);
+	}
+}
+
+static unsigned int virtio_crypto_sym_get_session_private_size(
+		struct rte_cryptodev *dev __rte_unused)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	return RTE_ALIGN_CEIL(sizeof(struct virtio_crypto_session), 8);
+}
+
+static int virtio_crypto_check_sym_session_paras(
+		struct rte_cryptodev *dev)
+{
+	struct virtio_crypto_hw *hw;
+	struct virtqueue *vq;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (unlikely(dev == NULL)) {
+		PMD_SESSION_LOG(ERR, "dev is NULL");
+		return -1;
+	}
+	if (unlikely(dev->data == NULL)) {
+		PMD_SESSION_LOG(ERR, "dev->data is NULL");
+		return -1;
+	}
+	hw = dev->data->dev_private;
+	if (unlikely(hw == NULL)) {
+		PMD_SESSION_LOG(ERR, "hw is NULL");
+		return -1;
+	}
+	vq = hw->cvq;
+	if (unlikely(vq == NULL)) {
+		PMD_SESSION_LOG(ERR, "vq is NULL");
+		return -1;
+	}
+
+	return 0;
+}
+
+static int virtio_crypto_check_sym_clear_session_paras(
+		struct rte_cryptodev *dev,
+		struct rte_cryptodev_sym_session *sess)
+{
+	PMD_INIT_FUNC_TRACE();
+
+	if (sess == NULL) {
+		PMD_SESSION_LOG(ERR, "sym_session is NULL");
+		return -1;
+	}
+
+	return virtio_crypto_check_sym_session_paras(dev);
+}
+
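+/*
+ * Destroying a session takes two indirect descriptors: one for the
+ * virtio_crypto_op_ctrl_req and one, device-writable, for the returned
+ * status byte.
+ */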
+#define NUM_ENTRY_VIRTIO_CRYPTO_SYM_CLEAR_SESSION 2
+
+static void virtio_crypto_sym_clear_session(
+		struct rte_cryptodev *dev,
+		struct rte_cryptodev_sym_session *sess)
+{
+	struct virtio_crypto_hw *hw;
+	struct virtqueue *vq;
+	struct virtio_crypto_session *session;
+	struct virtio_crypto_op_ctrl_req *ctrl;
+	struct vring_desc *desc;
+	uint8_t *status;
+	uint8_t needed = 1;
+	uint32_t head;
+	uint8_t *malloc_addr;
+	uint64_t malloc_phys_addr;
+	uint8_t status_len = sizeof(struct virtio_crypto_inhdr);
+	uint32_t op_ctrl_req_len = sizeof(struct virtio_crypto_op_ctrl_req);
+	uint32_t desc_offset_len = op_ctrl_req_len + status_len;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (virtio_crypto_check_sym_clear_session_paras(dev, sess) < 0)
+		return;
+
+	hw = dev->data->dev_private;
+	vq = hw->cvq;
+	session = (struct virtio_crypto_session *)get_session_private_data(
+		sess, cryptodev_virtio_driver_id);
+
+	if (session == NULL) {
+		PMD_SESSION_LOG(ERR, "Invalid session parameter");
+		return;
+	}
+
+	PMD_SESSION_LOG(INFO, "vq->vq_desc_head_idx = %d, "
+			"vq = %p", vq->vq_desc_head_idx, vq);
+
+	if (vq->vq_free_cnt < needed) {
+		PMD_SESSION_LOG(ERR,
+				"vq->vq_free_cnt = %d is less than %d, "
+				"not enough", vq->vq_free_cnt, needed);
+		return;
+	}
+
+	/*
+	 * Allocate memory to hold the ctrl request op, the returned status
+	 * and the indirect desc vring, laid out contiguously in that order.
+	 */
+	malloc_addr = rte_malloc(NULL, op_ctrl_req_len + status_len
+		+ NUM_ENTRY_VIRTIO_CRYPTO_SYM_CLEAR_SESSION
+		* sizeof(struct vring_desc), RTE_CACHE_LINE_SIZE);
+	if (malloc_addr == NULL) {
+		PMD_SESSION_LOG(ERR, "not enough heap room");
+		return;
+	}
+	malloc_phys_addr = rte_malloc_virt2phy(malloc_addr);
+
+	/* assign ctrl request op part */
+	ctrl = (struct virtio_crypto_op_ctrl_req *)malloc_addr;
+	ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_DESTROY_SESSION;
+	/* Set the default dataqueue id to 0 */
+	ctrl->header.queue_id = 0;
+	ctrl->u.destroy_session.session_id = session->session_id;
+
+	/* assign status part */
+	status = &(((struct virtio_crypto_inhdr *)
+				((uint8_t *)malloc_addr + op_ctrl_req_len))->status);
+	*status = VIRTIO_CRYPTO_ERR;
+
+	/* assign indirect desc vring part */
+	desc = (struct vring_desc *)((uint8_t *)malloc_addr
+		+ desc_offset_len);
+
+	/* ctrl request part */
+	desc[0].addr = malloc_phys_addr;
+	desc[0].len = op_ctrl_req_len;
+	desc[0].flags = VRING_DESC_F_NEXT;
+	desc[0].next = 1;
+
+	/* status part */
+	desc[1].addr = malloc_phys_addr + op_ctrl_req_len;
+	desc[1].len = status_len;
+	desc[1].flags = VRING_DESC_F_WRITE;
+
+	/* use a single head descriptor that points at the indirect table */
+	head = vq->vq_desc_head_idx;
+	vq->vq_ring.desc[head].flags = VRING_DESC_F_INDIRECT;
+	vq->vq_ring.desc[head].addr = malloc_phys_addr + desc_offset_len;
+	vq->vq_ring.desc[head].len
+		= NUM_ENTRY_VIRTIO_CRYPTO_SYM_CLEAR_SESSION
+		* sizeof(struct vring_desc);
+
+	vq->vq_free_cnt -= needed;
+
+	vq->vq_desc_head_idx = vq->vq_ring.desc[head].next;
+
+	vq_crypto_update_avail_ring(vq, head);
+	vq_crypto_update_avail_idx(vq);
+
+	PMD_INIT_LOG(DEBUG, "vq->vq_queue_index = %d", vq->vq_queue_index);
+
+	virtqueue_crypto_notify(vq);
+
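+	/*
+	 * Control-queue commands are handled synchronously: poll the used
+	 * ring until the device has consumed the request.
+	 */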
+	rte_rmb();
+	while (vq->vq_used_cons_idx == vq->vq_ring.used->idx) {
+		rte_rmb();
+		usleep(100);
+	}
+
+	while (vq->vq_used_cons_idx != vq->vq_ring.used->idx) {
+		uint32_t idx, desc_idx, used_idx;
+		struct vring_used_elem *uep;
+
+		used_idx = (uint32_t)(vq->vq_used_cons_idx
+				& (vq->vq_nentries - 1));
+		uep = &vq->vq_ring.used->ring[used_idx];
+		idx = (uint32_t) uep->id;
+		desc_idx = idx;
+		while (vq->vq_ring.desc[desc_idx].flags
+				& VRING_DESC_F_NEXT) {
+			desc_idx = vq->vq_ring.desc[desc_idx].next;
+			vq->vq_free_cnt++;
+		}
+
+		vq->vq_ring.desc[desc_idx].next = vq->vq_desc_head_idx;
+		vq->vq_desc_head_idx = idx;
+		vq->vq_used_cons_idx++;
+		vq->vq_free_cnt++;
+	}
+
+	if (*status != VIRTIO_CRYPTO_OK) {
+		PMD_SESSION_LOG(ERR, "Failed to close session, "
+				"status=%"PRIu32", session_id=%"PRIu64"",
+				*status, session->session_id);
+		rte_free(malloc_addr);
+		return;
+	}
+
+	PMD_INIT_LOG(DEBUG, "vq->vq_free_cnt=%d\nvq->vq_desc_head_idx=%d",
+			vq->vq_free_cnt, vq->vq_desc_head_idx);
+
+	PMD_SESSION_LOG(INFO, "Session closed successfully, "
+			"session_id=%"PRIu64"", session->session_id);
+
+	if (sess) {
+		memset(sess, 0, sizeof(struct virtio_crypto_session));
+		rte_free(malloc_addr);
+	}
+}
+
+static struct rte_crypto_cipher_xform *
+virtio_crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+			return &xform->cipher;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+static struct rte_crypto_auth_xform *
+virtio_crypto_get_auth_xform(struct rte_crypto_sym_xform *xform)
+{
+	do {
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+			return &xform->auth;
+
+		xform = xform->next;
+	} while (xform);
+
+	return NULL;
+}
+
+/** Get xform chain order */
+static int
+virtio_crypto_get_chain_order(struct rte_crypto_sym_xform *xform)
+{
+	if (xform == NULL)
+		return -1;
+
+	/* Cipher Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+			xform->next == NULL)
+		return VIRTIO_CRYPTO_CMD_CIPHER;
+
+	/* Authentication Only */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+			xform->next == NULL)
+		return VIRTIO_CRYPTO_CMD_AUTH;
+
+	/* Authenticate then Cipher */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
+			xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER)
+		return VIRTIO_CRYPTO_CMD_HASH_CIPHER;
+
+	/* Cipher then Authenticate */
+	if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
+			xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+		return VIRTIO_CRYPTO_CMD_CIPHER_HASH;
+
+	return -1;
+}
+
+static int virtio_crypto_sym_pad_cipher_param(
+		struct virtio_crypto_cipher_session_para *para,
+		struct rte_crypto_cipher_xform *cipher_xform)
+{
+	switch (cipher_xform->algo) {
+	case RTE_CRYPTO_CIPHER_NULL:
+		para->algo = VIRTIO_CRYPTO_NO_CIPHER;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CBC:
+		para->algo = VIRTIO_CRYPTO_CIPHER_3DES_CBC;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_CTR:
+		para->algo = VIRTIO_CRYPTO_CIPHER_3DES_CTR;
+		break;
+	case RTE_CRYPTO_CIPHER_3DES_ECB:
+		para->algo = VIRTIO_CRYPTO_CIPHER_3DES_ECB;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		para->algo = VIRTIO_CRYPTO_CIPHER_AES_CBC;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+		para->algo = VIRTIO_CRYPTO_CIPHER_AES_CTR;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+		para->algo = VIRTIO_CRYPTO_CIPHER_AES_ECB;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_F8:
+		para->algo = VIRTIO_CRYPTO_CIPHER_AES_F8;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+		para->algo = VIRTIO_CRYPTO_CIPHER_AES_XTS;
+		break;
+	case RTE_CRYPTO_CIPHER_ARC4:
+		para->algo = VIRTIO_CRYPTO_CIPHER_ARC4;
+		break;
+	case RTE_CRYPTO_CIPHER_KASUMI_F8:
+		para->algo = VIRTIO_CRYPTO_CIPHER_KASUMI_F8;
+		break;
+	case RTE_CRYPTO_CIPHER_SNOW3G_UEA2:
+		para->algo = VIRTIO_CRYPTO_CIPHER_SNOW3G_UEA2;
+		break;
+	case RTE_CRYPTO_CIPHER_ZUC_EEA3:
+		para->algo = VIRTIO_CRYPTO_CIPHER_ZUC_EEA3;
+		break;
+	case RTE_CRYPTO_CIPHER_DES_CBC:
+		para->algo = VIRTIO_CRYPTO_CIPHER_DES_CBC;
+		break;
+	default:
+		PMD_SESSION_LOG(ERR, "Crypto: Unsupported Cipher alg %u",
+				cipher_xform->algo);
+		return -1;
+	}
+
+	para->keylen = cipher_xform->key.length;
+	switch (cipher_xform->op) {
+	case RTE_CRYPTO_CIPHER_OP_ENCRYPT:
+		para->op = VIRTIO_CRYPTO_OP_ENCRYPT;
+		break;
+	case RTE_CRYPTO_CIPHER_OP_DECRYPT:
+		para->op = VIRTIO_CRYPTO_OP_DECRYPT;
+		break;
+	default:
+		PMD_SESSION_LOG(ERR, "Unsupported cipher operation parameter");
+		return -1;
+	}
+
+	return 0;
+}
+
+static int virtio_crypto_sym_pad_auth_param(
+		struct virtio_crypto_op_ctrl_req *ctrl,
+		struct rte_crypto_auth_xform *auth_xform)
+{
+	uint32_t *algo;
+
+	switch (ctrl->u.sym_create_session.u.chain.para.hash_mode) {
+	case VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN:
+		algo = &ctrl->u.sym_create_session.u.chain.para.u.hash_param.algo;
+		break;
+	case VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH:
+		algo = &ctrl->u.sym_create_session.u.chain.para.u.mac_param.algo;
+		break;
+	default:
+		PMD_SESSION_LOG(ERR, "Unsupported hash mode %u specified",
+			ctrl->u.sym_create_session.u.chain.para.hash_mode);
+		return -1;
+	}
+
+	switch (auth_xform->algo) {
+	case RTE_CRYPTO_AUTH_NULL:
+		*algo = VIRTIO_CRYPTO_NO_MAC;
+		break;
+	case RTE_CRYPTO_AUTH_AES_CBC_MAC:
+		*algo = VIRTIO_CRYPTO_MAC_CBCMAC_AES;
+		break;
+	case RTE_CRYPTO_AUTH_AES_CMAC:
+		*algo = VIRTIO_CRYPTO_MAC_CMAC_AES;
+		break;
+	case RTE_CRYPTO_AUTH_AES_GMAC:
+		*algo = VIRTIO_CRYPTO_MAC_GMAC_AES;
+		break;
+	case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
+		*algo = VIRTIO_CRYPTO_MAC_XCBC_AES;
+		break;
+	case RTE_CRYPTO_AUTH_KASUMI_F9:
+		*algo = VIRTIO_CRYPTO_MAC_KASUMI_F9;
+		break;
+	case RTE_CRYPTO_AUTH_MD5:
+		*algo = VIRTIO_CRYPTO_HASH_MD5;
+		break;
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_MD5;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1:
+		*algo = VIRTIO_CRYPTO_HASH_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_SHA1;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224:
+		*algo = VIRTIO_CRYPTO_HASH_SHA_224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_SHA_224;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256:
+		*algo = VIRTIO_CRYPTO_HASH_SHA_256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_SHA_256;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384:
+		*algo = VIRTIO_CRYPTO_HASH_SHA_384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_SHA_384;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512:
+		*algo = VIRTIO_CRYPTO_HASH_SHA_512;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		*algo = VIRTIO_CRYPTO_MAC_HMAC_SHA_512;
+		break;
+	case RTE_CRYPTO_AUTH_SNOW3G_UIA2:
+		*algo = VIRTIO_CRYPTO_MAC_SNOW3G_UIA2;
+		break;
+	default:
+		PMD_SESSION_LOG(ERR,
+			"Crypto: Undefined Hash algo %u specified",
+			auth_xform->algo);
+		return -1;
+	}
+
+	return 0;
+}
+
+static int virtio_crypto_sym_pad_op_ctrl_req(
+		struct virtio_crypto_op_ctrl_req *ctrl,
+		struct rte_crypto_sym_xform *xform, bool is_chained,
+		uint8_t **cipher_key_data, uint8_t **auth_key_data,
+		struct virtio_crypto_session *session)
+{
+	int ret;
+	struct rte_crypto_auth_xform *auth_xform = NULL;
+	struct rte_crypto_cipher_xform *cipher_xform = NULL;
+
+	if (ctrl == NULL) {
+		PMD_SESSION_LOG(ERR, "ctrl request is NULL");
+		return -1;
+	}
+
+	if (xform == NULL) {
+		PMD_SESSION_LOG(ERR, "xform is NULL");
+		return -1;
+	}
+
+	/* Get cipher xform from crypto xform chain */
+	cipher_xform = virtio_crypto_get_cipher_xform(xform);
+	if (cipher_xform) {
+		if (is_chained)
+			ret = virtio_crypto_sym_pad_cipher_param(
+				&ctrl->u.sym_create_session.u.chain.para
+						.cipher_param, cipher_xform);
+		else
+			ret = virtio_crypto_sym_pad_cipher_param(
+				&ctrl->u.sym_create_session.u.cipher.para,
+				cipher_xform);
+
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR,
+				"pad cipher parameter failed");
+			return -1;
+		}
+
+		*cipher_key_data = cipher_xform->key.data;
+
+		session->iv.offset = cipher_xform->iv.offset;
+		session->iv.length = cipher_xform->iv.length;
+	}
+
+	/* Get auth xform from crypto xform chain */
+	auth_xform = virtio_crypto_get_auth_xform(xform);
+	if (auth_xform) {
+		/* FIXME: support VIRTIO_CRYPTO_SYM_HASH_MODE_NESTED */
+		if (auth_xform->key.length) {
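+			/* a keyed algorithm implies MAC mode (e.g. HMAC);
+			 * a zero-length key means a plain hash
+			 */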
+			ctrl->u.sym_create_session.u.chain.para.hash_mode
+				= VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH;
+			ctrl->u.sym_create_session.u.chain.para.u.mac_param.
+				auth_key_len = (uint32_t)auth_xform->key.length;
+			ctrl->u.sym_create_session.u.chain.para.u.mac_param.
+				hash_result_len = auth_xform->digest_length;
+
+			*auth_key_data = auth_xform->key.data;
+		} else {
+			ctrl->u.sym_create_session.u.chain.para.hash_mode
+				= VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN;
+			ctrl->u.sym_create_session.u.chain.para.u.hash_param.
+				hash_result_len = auth_xform->digest_length;
+		}
+
+		ret = virtio_crypto_sym_pad_auth_param(ctrl, auth_xform);
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR, "pad auth parameter failed");
+			return -1;
+		}
+	}
+
+	return 0;
+}
+
+static int virtio_crypto_check_sym_configure_session_paras(
+		struct rte_cryptodev *dev,
+		struct rte_crypto_sym_xform *xform,
+		struct rte_cryptodev_sym_session *sym_sess,
+		struct rte_mempool *mempool)
+{
+	if (unlikely(xform == NULL) || unlikely(sym_sess == NULL) ||
+		unlikely(mempool == NULL)) {
+		PMD_SESSION_LOG(ERR, "NULL pointer");
+		return -1;
+	}
+
+	if (virtio_crypto_check_sym_session_paras(dev) < 0)
+		return -1;
+
+	return 0;
+}
+
+static int virtio_crypto_sym_configure_session(
+		struct rte_cryptodev *dev,
+		struct rte_crypto_sym_xform *xform,
+		struct rte_cryptodev_sym_session *sess,
+		struct rte_mempool *mempool)
+{
+	void *session_private;
+	int ret;
+	uint8_t *cipher_key_data = NULL;
+	uint8_t *auth_key_data = NULL;
+	struct virtio_crypto_hw *hw;
+	struct virtqueue *vq;
+	struct virtio_crypto_session *session;
+	struct virtio_crypto_op_ctrl_req *ctrl;
+	enum virtio_crypto_cmd_id cmd_id;
+
+	PMD_INIT_FUNC_TRACE();
+
+	ret = virtio_crypto_check_sym_configure_session_paras(dev, xform,
+			sess, mempool);
+	if (ret < 0) {
+		PMD_SESSION_LOG(ERR, "Invalid parameters");
+		return ret;
+	}
+
+	if (rte_mempool_get(mempool, &session_private)) {
+		PMD_SESSION_LOG(ERR,
+			"Couldn't get object from session mempool");
+		return -ENOMEM;
+	}
+
+	session = (struct virtio_crypto_session *)session_private;
+	ctrl = &session->ctrl;
+	memset(ctrl, 0, sizeof(struct virtio_crypto_op_ctrl_req));
+	ctrl->header.opcode = VIRTIO_CRYPTO_CIPHER_CREATE_SESSION;
+	/* FIXME: support multiqueue */
+	ctrl->header.queue_id = 0;
+
+	hw = dev->data->dev_private;
+	vq = hw->cvq;
+
+	cmd_id = virtio_crypto_get_chain_order(xform);
+	if (cmd_id == VIRTIO_CRYPTO_CMD_CIPHER_HASH)
+		ctrl->u.sym_create_session.u.chain.para.alg_chain_order
+			= VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_CIPHER_THEN_HASH;
+	if (cmd_id == VIRTIO_CRYPTO_CMD_HASH_CIPHER)
+		ctrl->u.sym_create_session.u.chain.para.alg_chain_order
+			= VIRTIO_CRYPTO_SYM_ALG_CHAIN_ORDER_HASH_THEN_CIPHER;
+
+	switch (cmd_id) {
+	case VIRTIO_CRYPTO_CMD_CIPHER_HASH:
+	case VIRTIO_CRYPTO_CMD_HASH_CIPHER:
+		ctrl->u.sym_create_session.op_type
+			= VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
+
+		ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl,
+			xform, true, &cipher_key_data, &auth_key_data, session);
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR,
+				"padding sym op ctrl req failed");
+			goto error_out;
+		}
+		ret = virtio_crypto_send_command(vq, ctrl,
+			cipher_key_data, auth_key_data, session);
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR,
+				"create session failed: %d", ret);
+			goto error_out;
+		}
+		break;
+	case VIRTIO_CRYPTO_CMD_CIPHER:
+		ctrl->u.sym_create_session.op_type
+			= VIRTIO_CRYPTO_SYM_OP_CIPHER;
+		ret = virtio_crypto_sym_pad_op_ctrl_req(ctrl,
+			xform, false, &cipher_key_data, &auth_key_data, session);
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR,
+				"padding sym op ctrl req failed");
+			goto error_out;
+		}
+		ret = virtio_crypto_send_command(vq, ctrl,
+			cipher_key_data, NULL, session);
+		if (ret < 0) {
+			PMD_SESSION_LOG(ERR,
+				"create session failed: %d", ret);
+			goto error_out;
+		}
+		break;
+	default:
+		PMD_SESSION_LOG(ERR,
+			"Unsupported operation chain order parameter");
+		goto error_out;
+	}
+
+	set_session_private_data(sess, dev->driver_id,
+		session_private);
+
+	return 0;
+
+error_out:
+	return -1;
+}
+
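+/*
+ * From the application's point of view, creating a session on this PMD
+ * follows the generic cryptodev flow; a minimal sketch (the key/IV values
+ * and the mempool name are placeholders):
+ *
+ *	struct rte_crypto_sym_xform xform = {
+ *		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+ *		.cipher = {
+ *			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
+ *			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+ *			.key = { .data = key, .length = 16 },
+ *			.iv = { .offset = iv_offset, .length = 16 },
+ *		},
+ *	};
+ *	sess = rte_cryptodev_sym_session_create(sess_mempool);
+ *	rte_cryptodev_sym_session_init(dev_id, sess, &xform, sess_mempool);
+ *
+ * rte_cryptodev_sym_session_init() ends up in
+ * virtio_crypto_sym_configure_session() above.
+ */
+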
+static void
+virtio_crypto_dev_info_get(struct rte_cryptodev *dev,
+		struct rte_cryptodev_info *info)
+{
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	if (info != NULL) {
+		info->driver_id = cryptodev_virtio_driver_id;
+		info->pci_dev = RTE_DEV_TO_PCI(dev->device);
+		info->feature_flags = dev->feature_flags;
+		info->max_nb_queue_pairs = hw->max_dataqueues;
+		info->sym.max_nb_sessions =
+			RTE_VIRTIO_CRYPTO_PMD_MAX_NB_SESSIONS;
+	}
+}
+
+static int crypto_virtio_pci_probe(
+	struct rte_pci_driver *pci_drv __rte_unused,
+	struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev_pmd_init_params init_params = {
+		.name = "",
+		.socket_id = rte_socket_id(),
+		.private_data_size = sizeof(struct virtio_crypto_hw),
+		.max_nb_sessions = RTE_VIRTIO_CRYPTO_PMD_MAX_NB_SESSIONS
+	};
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	PMD_DRV_LOG(DEBUG, "Found Crypto device at %02x:%02x.%x",
+			pci_dev->addr.bus,
+			pci_dev->addr.devid,
+			pci_dev->addr.function);
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+	return crypto_virtio_create(name, pci_dev, &init_params);
+}
+
+static int crypto_virtio_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_cryptodev *cryptodev;
+	char cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_pci_device_name(&pci_dev->addr, cryptodev_name,
+			sizeof(cryptodev_name));
+
+	cryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);
+	if (cryptodev == NULL)
+		return -ENODEV;
+
+	return virtio_crypto_dev_uninit(cryptodev);
+}
+
+static struct rte_pci_driver rte_virtio_crypto_driver = {
+	.id_table = pci_id_virtio_crypto_map,
+	.drv_flags = 0,
+	.probe = crypto_virtio_pci_probe,
+	.remove = crypto_virtio_pci_remove
+};
+
+static struct cryptodev_driver virtio_crypto_drv;
+
+RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_VIRTIO_SYM_PMD, rte_virtio_crypto_driver);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(virtio_crypto_drv, rte_virtio_crypto_driver,
+		cryptodev_virtio_driver_id);
diff --git a/drivers/crypto/virtio/virtio_cryptodev.h b/drivers/crypto/virtio/virtio_cryptodev.h
new file mode 100644
index 0000000..5d22ab8
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_cryptodev.h
@@ -0,0 +1,87 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_CRYPTODEV_H_
+#define _VIRTIO_CRYPTODEV_H_
+
+#include "virtio_pci.h"
+#include "virtio_crypto.h"
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE 4096
+#endif
+
+#define CRYPTODEV_NAME_VIRTIO_SYM_PMD  crypto_virtio
+
+#define VIRTIO_MAX_DATA_QUEUES 128
+
+/* Features desired/implemented by this driver. */
+#define VIRTIO_CRYPTO_PMD_GUEST_FEATURES (1ULL << VIRTIO_F_VERSION_1)
+
+extern uint8_t cryptodev_virtio_driver_id;
+
+enum virtio_crypto_cmd_id {
+	VIRTIO_CRYPTO_CMD_CIPHER = 0,
+	VIRTIO_CRYPTO_CMD_AUTH = 1,
+	VIRTIO_CRYPTO_CMD_CIPHER_HASH = 2,
+	VIRTIO_CRYPTO_CMD_HASH_CIPHER = 3
+};
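+
+/*
+ * These ids map to the xform chains accepted by
+ * virtio_crypto_get_chain_order(): cipher only -> CMD_CIPHER, auth only
+ * -> CMD_AUTH, auth then cipher -> CMD_HASH_CIPHER, and cipher then
+ * auth -> CMD_CIPHER_HASH.
+ */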
+
+/*
+ * Control queue function prototype
+ */
+void virtio_crypto_cq_start(struct rte_cryptodev *dev);
+
+/*
+ * Data queue function prototype
+ */
+void virtio_crypto_dq_start(struct rte_cryptodev *dev);
+
+int virtio_crypto_queue_setup(struct rte_cryptodev *dev,
+		int queue_type,
+		uint16_t vtpci_queue_idx,
+		uint16_t nb_desc,
+		int socket_id,
+		struct virtqueue **pvq);
+
+void virtio_crypto_queue_release(struct virtqueue *vq);
+
+uint16_t virtio_crypto_pkt_tx_burst(void *tx_queue,
+		struct rte_crypto_op **tx_pkts,
+		uint16_t nb_pkts);
+
+uint16_t virtio_crypto_pkt_rx_burst(void *rx_queue,
+		struct rte_crypto_op **rx_pkts,
+		uint16_t nb_pkts);
+
+#endif /* _VIRTIO_CRYPTODEV_H_ */
diff --git a/drivers/crypto/virtio/virtio_logs.h b/drivers/crypto/virtio/virtio_logs.h
new file mode 100644
index 0000000..41c663d
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_logs.h
@@ -0,0 +1,76 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_LOGS_H_
+#define _VIRTIO_LOGS_H_
+
+#include <rte_log.h>
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_INIT
+#define PMD_INIT_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): " fmt "\n", __func__, ## args)
+#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>")
+#else
+#define PMD_INIT_LOG(level, fmt, args...) do { } while (0)
+#define PMD_INIT_FUNC_TRACE() do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_SESSION
+#define PMD_SESSION_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() session: " fmt "\n", __func__, ## args)
+#else
+#define PMD_SESSION_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_RX
+#define PMD_RX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() rx: " fmt "\n", __func__, ## args)
+#else
+#define PMD_RX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_TX
+#define PMD_TX_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s() tx: " fmt "\n", __func__, ## args)
+#else
+#define PMD_TX_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_DRIVER
+#define PMD_DRV_LOG(level, fmt, args...) \
+	RTE_LOG(level, PMD, "%s(): driver " fmt "\n", __func__, ## args)
+#else
+#define PMD_DRV_LOG(level, fmt, args...) do { } while (0)
+#endif
+
+#endif /* _VIRTIO_LOGS_H_ */
\ No newline at end of file
diff --git a/drivers/crypto/virtio/virtio_pci.c b/drivers/crypto/virtio/virtio_pci.c
new file mode 100644
index 0000000..d594ea5
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_pci.c
@@ -0,0 +1,714 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+
+#ifdef RTE_EXEC_ENV_LINUXAPP
+ #include <dirent.h>
+ #include <fcntl.h>
+#endif
+
+#include <rte_io.h>
+#include <rte_bus.h>
+#include <rte_dev.h>
+#include <rte_devargs.h>
+
+#include "virtio_pci.h"
+#include "virtio_logs.h"
+#include "virtqueue.h"
+
+/*
+ * The following macros are derived from linux/pci_regs.h; however, we
+ * can't simply include that header here, as it does not exist on
+ * non-Linux platforms.
+ */
+#define PCI_CAPABILITY_LIST	0x34
+#define PCI_CAP_ID_VNDR		0x09
+#define PCI_CAP_ID_MSIX		0x11
+
+/*
+ * The remaining space is defined by each driver as the per-driver
+ * configuration space.
+ */
+#define VIRTIO_PCI_CONFIG(hw) (((hw)->use_msix) ? 24 : 20)
+
+static inline int
+check_vq_phys_addr_ok(struct virtqueue *vq)
+{
+	/* The virtio PCI device VIRTIO_PCI_QUEUE_PFN register is 32 bit,
+	 * and only accepts a 32 bit page frame number.
+	 * Check if the allocated physical memory exceeds 16TB.
+	 */
+	if ((vq->vq_ring_mem + vq->vq_ring_size - 1) >>
+			(VIRTIO_PCI_QUEUE_ADDR_SHIFT + 32)) {
+		PMD_INIT_LOG(ERR, "vring address shouldn't be above 16TB!");
+		return 0;
+	}
+
+	return 1;
+}
+
+/*
+ * Since we are in legacy mode:
+ * http://ozlabs.org/~rusty/virtio-spec/virtio-0.9.5.pdf
+ *
+ * "Note that this is possible because while the virtio header is PCI (i.e.
+ * little) endian, the device-specific region is encoded in the native endian of
+ * the guest (where such distinction is applicable)."
+ *
+ * For powerpc, which supports both endiannesses, QEMU assumes the CPU is
+ * big endian and enforces this for the virtio-net parts.
+ */
+static void
+legacy_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+		       void *dst, int length)
+{
+#ifdef RTE_ARCH_PPC_64
+	int size;
+
+	while (length > 0) {
+		if (length >= 4) {
+			size = 4;
+			rte_pci_ioport_read(VTPCI_IO(hw), dst, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+			*(uint32_t *)dst = rte_be_to_cpu_32(*(uint32_t *)dst);
+		} else if (length >= 2) {
+			size = 2;
+			rte_pci_ioport_read(VTPCI_IO(hw), dst, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+			*(uint16_t *)dst = rte_be_to_cpu_16(*(uint16_t *)dst);
+		} else {
+			size = 1;
+			rte_pci_ioport_read(VTPCI_IO(hw), dst, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+		}
+
+		dst = (char *)dst + size;
+		offset += size;
+		length -= size;
+	}
+#else
+	rte_pci_ioport_read(VTPCI_IO(hw), dst, length,
+		VIRTIO_PCI_CONFIG(hw) + offset);
+#endif
+}
+
+static void
+legacy_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+			const void *src, int length)
+{
+#ifdef RTE_ARCH_PPC_64
+	union {
+		uint32_t u32;
+		uint16_t u16;
+	} tmp;
+	int size;
+
+	while (length > 0) {
+		if (length >= 4) {
+			size = 4;
+			tmp.u32 = rte_cpu_to_be_32(*(const uint32_t *)src);
+			rte_pci_ioport_write(VTPCI_IO(hw), &tmp.u32, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+		} else if (length >= 2) {
+			size = 2;
+			tmp.u16 = rte_cpu_to_be_16(*(const uint16_t *)src);
+			rte_pci_ioport_write(VTPCI_IO(hw), &tmp.u16, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+		} else {
+			size = 1;
+			rte_pci_ioport_write(VTPCI_IO(hw), src, size,
+				VIRTIO_PCI_CONFIG(hw) + offset);
+		}
+
+		src = (const char *)src + size;
+		offset += size;
+		length -= size;
+	}
+#else
+	rte_pci_ioport_write(VTPCI_IO(hw), src, length,
+		VIRTIO_PCI_CONFIG(hw) + offset);
+#endif
+}
+
+static uint64_t
+legacy_get_features(struct virtio_crypto_hw *hw)
+{
+	uint32_t dst;
+
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 4, VIRTIO_PCI_HOST_FEATURES);
+	return dst;
+}
+
+static void
+legacy_set_features(struct virtio_crypto_hw *hw, uint64_t features)
+{
+	if ((features >> 32) != 0) {
+		PMD_DRV_LOG(ERR,
+			"only 32 bit features are allowed for legacy virtio!");
+		return;
+	}
+	rte_pci_ioport_write(VTPCI_IO(hw), &features, 4,
+		VIRTIO_PCI_GUEST_FEATURES);
+}
+
+static uint8_t
+legacy_get_status(struct virtio_crypto_hw *hw)
+{
+	uint8_t dst;
+
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 1, VIRTIO_PCI_STATUS);
+	return dst;
+}
+
+static void
+legacy_set_status(struct virtio_crypto_hw *hw, uint8_t status)
+{
+	rte_pci_ioport_write(VTPCI_IO(hw), &status, 1, VIRTIO_PCI_STATUS);
+}
+
+static void
+legacy_reset(struct virtio_crypto_hw *hw)
+{
+	legacy_set_status(hw, VIRTIO_CONFIG_STATUS_RESET);
+}
+
+static uint8_t
+legacy_get_isr(struct virtio_crypto_hw *hw)
+{
+	uint8_t dst;
+
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 1, VIRTIO_PCI_ISR);
+	return dst;
+}
+
+/* Enable one vector (0) for the configuration change interrupt */
+static uint16_t
+legacy_set_config_irq(struct virtio_crypto_hw *hw, uint16_t vec)
+{
+	uint16_t dst;
+
+	rte_pci_ioport_write(VTPCI_IO(hw), &vec, 2, VIRTIO_MSI_CONFIG_VECTOR);
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 2, VIRTIO_MSI_CONFIG_VECTOR);
+	return dst;
+}
+
+static uint16_t
+legacy_set_queue_irq(struct virtio_crypto_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+	uint16_t dst;
+
+	rte_pci_ioport_write(VTPCI_IO(hw), &vq->vq_queue_index, 2,
+		VIRTIO_PCI_QUEUE_SEL);
+	rte_pci_ioport_write(VTPCI_IO(hw), &vec, 2, VIRTIO_MSI_QUEUE_VECTOR);
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 2, VIRTIO_MSI_QUEUE_VECTOR);
+	return dst;
+}
+
+static uint16_t
+legacy_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id)
+{
+	uint16_t dst;
+
+	rte_pci_ioport_write(VTPCI_IO(hw), &queue_id, 2, VIRTIO_PCI_QUEUE_SEL);
+	rte_pci_ioport_read(VTPCI_IO(hw), &dst, 2, VIRTIO_PCI_QUEUE_NUM);
+	return dst;
+}
+
+static int
+legacy_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+	uint32_t src;
+
+	if (!check_vq_phys_addr_ok(vq))
+		return -1;
+
+	rte_pci_ioport_write(VTPCI_IO(hw), &vq->vq_queue_index, 2,
+		VIRTIO_PCI_QUEUE_SEL);
+	src = vq->vq_ring_mem >> VIRTIO_PCI_QUEUE_ADDR_SHIFT;
+	rte_pci_ioport_write(VTPCI_IO(hw), &src, 4, VIRTIO_PCI_QUEUE_PFN);
+
+	return 0;
+}
+
+static void
+legacy_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+	uint32_t src = 0;
+
+	rte_pci_ioport_write(VTPCI_IO(hw), &vq->vq_queue_index, 2,
+		VIRTIO_PCI_QUEUE_SEL);
+	rte_pci_ioport_write(VTPCI_IO(hw), &src, 4, VIRTIO_PCI_QUEUE_PFN);
+}
+
+static void
+legacy_notify_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+	rte_pci_ioport_write(VTPCI_IO(hw), &vq->vq_queue_index, 2,
+		VIRTIO_PCI_QUEUE_NOTIFY);
+}
+
+const struct virtio_pci_ops legacy_ops_crypto = {
+	.read_dev_cfg	= legacy_read_dev_config,
+	.write_dev_cfg	= legacy_write_dev_config,
+	.reset		= legacy_reset,
+	.get_status	= legacy_get_status,
+	.set_status	= legacy_set_status,
+	.get_features	= legacy_get_features,
+	.set_features	= legacy_set_features,
+	.get_isr	= legacy_get_isr,
+	.set_config_irq	= legacy_set_config_irq,
+	.set_queue_irq  = legacy_set_queue_irq,
+	.get_queue_num	= legacy_get_queue_num,
+	.setup_queue	= legacy_setup_queue,
+	.del_queue	= legacy_del_queue,
+	.notify_queue	= legacy_notify_queue,
+};
+
+static inline void
+io_write64_twopart(uint64_t val, uint32_t *lo, uint32_t *hi)
+{
+	rte_write32(val & ((1ULL << 32) - 1), lo);
+	rte_write32(val >> 32,		     hi);
+}
+
+static void
+modern_read_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+		       void *dst, int length)
+{
+	int i;
+	uint8_t *p;
+	uint8_t old_gen, new_gen;
+
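+	/*
+	 * Re-read if the generation counter changed while copying, so that
+	 * a torn (partially updated) config snapshot is never returned.
+	 */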
+	do {
+		old_gen = rte_read8(&hw->common_cfg->config_generation);
+
+		p = dst;
+		for (i = 0; i < length; i++)
+			*p++ = rte_read8((uint8_t *)hw->dev_cfg + offset + i);
+
+		new_gen = rte_read8(&hw->common_cfg->config_generation);
+	} while (old_gen != new_gen);
+}
+
+static void
+modern_write_dev_config(struct virtio_crypto_hw *hw, size_t offset,
+			const void *src, int length)
+{
+	int i;
+	const uint8_t *p = src;
+
+	for (i = 0; i < length; i++)
+		rte_write8((*p++), (((uint8_t *)hw->dev_cfg) + offset + i));
+}
+
+static uint64_t
+modern_get_features(struct virtio_crypto_hw *hw)
+{
+	uint32_t features_lo, features_hi;
+
+	rte_write32(0, &hw->common_cfg->device_feature_select);
+	features_lo = rte_read32(&hw->common_cfg->device_feature);
+
+	rte_write32(1, &hw->common_cfg->device_feature_select);
+	features_hi = rte_read32(&hw->common_cfg->device_feature);
+
+	return ((uint64_t)features_hi << 32) | features_lo;
+}
+
+static void
+modern_set_features(struct virtio_crypto_hw *hw, uint64_t features)
+{
+	rte_write32(0, &hw->common_cfg->guest_feature_select);
+	rte_write32(features & ((1ULL << 32) - 1),
+		    &hw->common_cfg->guest_feature);
+
+	rte_write32(1, &hw->common_cfg->guest_feature_select);
+	rte_write32(features >> 32,
+		    &hw->common_cfg->guest_feature);
+}
+
+static uint8_t
+modern_get_status(struct virtio_crypto_hw *hw)
+{
+	return rte_read8(&hw->common_cfg->device_status);
+}
+
+static void
+modern_set_status(struct virtio_crypto_hw *hw, uint8_t status)
+{
+	rte_write8(status, &hw->common_cfg->device_status);
+}
+
+static void
+modern_reset(struct virtio_crypto_hw *hw)
+{
+	modern_set_status(hw, VIRTIO_CONFIG_STATUS_RESET);
+	modern_get_status(hw);
+}
+
+static uint8_t
+modern_get_isr(struct virtio_crypto_hw *hw)
+{
+	return rte_read8(hw->isr);
+}
+
+static uint16_t
+modern_set_config_irq(struct virtio_crypto_hw *hw, uint16_t vec)
+{
+	rte_write16(vec, &hw->common_cfg->msix_config);
+	return rte_read16(&hw->common_cfg->msix_config);
+}
+
+static uint16_t
+modern_set_queue_irq(struct virtio_crypto_hw *hw, struct virtqueue *vq, uint16_t vec)
+{
+	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+	rte_write16(vec, &hw->common_cfg->queue_msix_vector);
+	return rte_read16(&hw->common_cfg->queue_msix_vector);
+}
+
+static uint16_t
+modern_get_queue_num(struct virtio_crypto_hw *hw, uint16_t queue_id)
+{
+	rte_write16(queue_id, &hw->common_cfg->queue_select);
+	return rte_read16(&hw->common_cfg->queue_size);
+}
+
+static int
+modern_setup_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+	uint64_t desc_addr, avail_addr, used_addr;
+	uint16_t notify_off;
+
+	if (!check_vq_phys_addr_ok(vq))
+		return -1;
+
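+	/*
+	 * Split-ring layout: the descriptor table, then the avail ring,
+	 * then the used ring aligned up to VIRTIO_PCI_VRING_ALIGN.
+	 */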
+	desc_addr = vq->vq_ring_mem;
+	avail_addr = desc_addr + vq->vq_nentries * sizeof(struct vring_desc);
+	used_addr = RTE_ALIGN_CEIL(avail_addr + offsetof(struct vring_avail,
+							 ring[vq->vq_nentries]),
+				   VIRTIO_PCI_VRING_ALIGN);
+
+	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+	io_write64_twopart(desc_addr, &hw->common_cfg->queue_desc_lo,
+				      &hw->common_cfg->queue_desc_hi);
+	io_write64_twopart(avail_addr, &hw->common_cfg->queue_avail_lo,
+				       &hw->common_cfg->queue_avail_hi);
+	io_write64_twopart(used_addr, &hw->common_cfg->queue_used_lo,
+				      &hw->common_cfg->queue_used_hi);
+
+	notify_off = rte_read16(&hw->common_cfg->queue_notify_off);
+	vq->notify_addr = (void *)((uint8_t *)hw->notify_base +
+				notify_off * hw->notify_off_multiplier);
+
+	rte_write16(1, &hw->common_cfg->queue_enable);
+
+	PMD_INIT_LOG(DEBUG, "queue %u addresses:", vq->vq_queue_index);
+	PMD_INIT_LOG(DEBUG, "\t desc_addr: %" PRIx64, desc_addr);
+	PMD_INIT_LOG(DEBUG, "\t avail_addr: %" PRIx64, avail_addr);
+	PMD_INIT_LOG(DEBUG, "\t used_addr: %" PRIx64, used_addr);
+	PMD_INIT_LOG(DEBUG, "\t notify addr: %p (notify offset: %u)",
+		vq->notify_addr, notify_off);
+
+	return 0;
+}
+
+static void
+modern_del_queue(struct virtio_crypto_hw *hw, struct virtqueue *vq)
+{
+	rte_write16(vq->vq_queue_index, &hw->common_cfg->queue_select);
+
+	io_write64_twopart(0, &hw->common_cfg->queue_desc_lo,
+				  &hw->common_cfg->queue_desc_hi);
+	io_write64_twopart(0, &hw->common_cfg->queue_avail_lo,
+				  &hw->common_cfg->queue_avail_hi);
+	io_write64_twopart(0, &hw->common_cfg->queue_used_lo,
+				  &hw->common_cfg->queue_used_hi);
+
+	rte_write16(0, &hw->common_cfg->queue_enable);
+}
+
+static void
+modern_notify_queue(struct virtio_crypto_hw *hw __rte_unused, struct virtqueue *vq)
+{
+	rte_write16(vq->vq_queue_index, vq->notify_addr);
+}
+
+const struct virtio_pci_ops modern_ops_crypto = {
+	.read_dev_cfg	= modern_read_dev_config,
+	.write_dev_cfg	= modern_write_dev_config,
+	.reset		= modern_reset,
+	.get_status	= modern_get_status,
+	.set_status	= modern_set_status,
+	.get_features	= modern_get_features,
+	.set_features	= modern_set_features,
+	.get_isr	= modern_get_isr,
+	.set_config_irq	= modern_set_config_irq,
+	.set_queue_irq  = modern_set_queue_irq,
+	.get_queue_num	= modern_get_queue_num,
+	.setup_queue	= modern_setup_queue,
+	.del_queue	= modern_del_queue,
+	.notify_queue	= modern_notify_queue,
+};
+
+void
+vtpci_read_dev_config_crypto(struct virtio_crypto_hw *hw, size_t offset,
+		      void *dst, int length)
+{
+	VTPCI_OPS(hw)->read_dev_cfg(hw, offset, dst, length);
+}
+
+void
+vtpci_write_dev_config_crypto(struct virtio_crypto_hw *hw, size_t offset,
+		       const void *src, int length)
+{
+	VTPCI_OPS(hw)->write_dev_cfg(hw, offset, src, length);
+}
+
+uint64_t
+vtpci_negotiate_features_crypto(struct virtio_crypto_hw *hw, uint64_t host_features)
+{
+	uint64_t features;
+
+	/*
+	 * Limit negotiated features to what the driver, virtqueue, and
+	 * host all support.
+	 */
+	features = host_features & hw->guest_features;
+	VTPCI_OPS(hw)->set_features(hw, features);
+
+	return features;
+}
+
+void
+vtpci_reset_crypto(struct virtio_crypto_hw *hw)
+{
+	VTPCI_OPS(hw)->set_status(hw, VIRTIO_CONFIG_STATUS_RESET);
+	/* flush status write */
+	VTPCI_OPS(hw)->get_status(hw);
+}
+
+void
+vtpci_reinit_complete_crypto(struct virtio_crypto_hw *hw)
+{
+	vtpci_set_status_crypto(hw, VIRTIO_CONFIG_STATUS_DRIVER_OK);
+}
+
+void
+vtpci_set_status_crypto(struct virtio_crypto_hw *hw, uint8_t status)
+{
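+	/* status bits are cumulative; OR in the current state unless resetting */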
+	if (status != VIRTIO_CONFIG_STATUS_RESET)
+		status |= VTPCI_OPS(hw)->get_status(hw);
+
+	VTPCI_OPS(hw)->set_status(hw, status);
+}
+
+uint8_t
+vtpci_get_status_crypto(struct virtio_crypto_hw *hw)
+{
+	return VTPCI_OPS(hw)->get_status(hw);
+}
+
+uint8_t
+vtpci_isr_crypto(struct virtio_crypto_hw *hw)
+{
+	return VTPCI_OPS(hw)->get_isr(hw);
+}
+
+static void *
+get_cfg_addr(struct rte_pci_device *dev, struct virtio_pci_cap *cap)
+{
+	uint8_t  bar    = cap->bar;
+	uint32_t length = cap->length;
+	uint32_t offset = cap->offset;
+	uint8_t *base;
+
+	if (bar >= PCI_MAX_RESOURCE) {
+		PMD_INIT_LOG(ERR, "invalid bar: %u", bar);
+		return NULL;
+	}
+
+	if (offset + length < offset) {
+		PMD_INIT_LOG(ERR, "offset(%u) + length(%u) overflows",
+			offset, length);
+		return NULL;
+	}
+
+	if (offset + length > dev->mem_resource[bar].len) {
+		PMD_INIT_LOG(ERR,
+			"invalid cap: overflows bar space: %u > %" PRIu64,
+			offset + length, dev->mem_resource[bar].len);
+		return NULL;
+	}
+
+	base = dev->mem_resource[bar].addr;
+	if (base == NULL) {
+		PMD_INIT_LOG(ERR, "bar %u base addr is NULL", bar);
+		return NULL;
+	}
+
+	return base + offset;
+}
+
+#define PCI_MSIX_ENABLE 0x8000
+
+static int
+virtio_read_caps(struct rte_pci_device *dev, struct virtio_crypto_hw *hw)
+{
+	uint8_t pos;
+	struct virtio_pci_cap cap;
+	int ret;
+
+	if (rte_pci_map_device(dev)) {
+		PMD_INIT_LOG(DEBUG, "failed to map pci device!");
+		return -1;
+	}
+
+	ret = rte_pci_read_config(dev, &pos, 1, PCI_CAPABILITY_LIST);
+	if (ret < 0) {
+		PMD_INIT_LOG(DEBUG, "failed to read pci capability list");
+		return -1;
+	}
+
+	while (pos) {
+		ret = rte_pci_read_config(dev, &cap, sizeof(cap), pos);
+		if (ret < 0) {
+			PMD_INIT_LOG(ERR,
+				"failed to read pci cap at pos: %x", pos);
+			break;
+		}
+
+		if (cap.cap_vndr == PCI_CAP_ID_MSIX) {
+			/* Transitional devices would also have this capability,
+			 * that's why we also check if msix is enabled.
+			 * 1st byte is cap ID; 2nd byte is the position of next
+			 * cap; next two bytes are the flags.
+			 */
+			uint16_t flags = ((uint16_t *)&cap)[1];
+
+			if (flags & PCI_MSIX_ENABLE)
+				hw->use_msix = 1;
+		}
+
+		if (cap.cap_vndr != PCI_CAP_ID_VNDR) {
+			PMD_INIT_LOG(DEBUG,
+				"[%2x] skipping non VNDR cap id: %02x",
+				pos, cap.cap_vndr);
+			goto next;
+		}
+
+		PMD_INIT_LOG(DEBUG,
+			"[%2x] cfg type: %u, bar: %u, offset: %04x, len: %u",
+			pos, cap.cfg_type, cap.bar, cap.offset, cap.length);
+
+		switch (cap.cfg_type) {
+		case VIRTIO_PCI_CAP_COMMON_CFG:
+			hw->common_cfg = get_cfg_addr(dev, &cap);
+			break;
+		case VIRTIO_PCI_CAP_NOTIFY_CFG:
+			rte_pci_read_config(dev, &hw->notify_off_multiplier,
+					4, pos + sizeof(cap));
+			hw->notify_base = get_cfg_addr(dev, &cap);
+			break;
+		case VIRTIO_PCI_CAP_DEVICE_CFG:
+			hw->dev_cfg = get_cfg_addr(dev, &cap);
+			break;
+		case VIRTIO_PCI_CAP_ISR_CFG:
+			hw->isr = get_cfg_addr(dev, &cap);
+			break;
+		}
+
+next:
+		pos = cap.cap_next;
+	}
+
+	if (hw->common_cfg == NULL || hw->notify_base == NULL ||
+	    hw->dev_cfg == NULL    || hw->isr == NULL) {
+		PMD_INIT_LOG(INFO, "no modern virtio pci device found.");
+		return -1;
+	}
+
+	PMD_INIT_LOG(INFO, "found modern virtio pci device.");
+
+	PMD_INIT_LOG(DEBUG, "common cfg mapped at: %p", hw->common_cfg);
+	PMD_INIT_LOG(DEBUG, "device cfg mapped at: %p", hw->dev_cfg);
+	PMD_INIT_LOG(DEBUG, "isr cfg mapped at: %p", hw->isr);
+	PMD_INIT_LOG(DEBUG, "notify base: %p, notify off multiplier: %u",
+		hw->notify_base, hw->notify_off_multiplier);
+
+	return 0;
+}
+
+/*
+ * Return -1:
+ *   if there is an error mapping with VFIO/UIO.
+ *   if the port map fails when the driver type is KDRV_NONE.
+ *   if the device is whitelisted but the driver type is KDRV_UNKNOWN.
+ * Return 1 if a kernel driver is managing the device.
+ * Return 0 on success.
+ */
+int
+vtpci_init_crypto(struct rte_pci_device *dev, struct virtio_crypto_hw *hw)
+{
+	/*
+	 * Try to read the virtio PCI caps, which exist only on modern PCI
+	 * devices. If that fails, fall back to legacy virtio handling.
+	 */
+	if (virtio_read_caps(dev, hw) == 0) {
+		PMD_INIT_LOG(INFO, "modern virtio pci detected.");
+		virtio_hw_internal[hw->dev_id].vtpci_ops = &modern_ops_crypto;
+		hw->modern = 1;
+		return 0;
+	}
+
+	PMD_INIT_LOG(INFO, "trying with legacy virtio pci.");
+	if (rte_pci_ioport_map(dev, 0, VTPCI_IO(hw)) < 0) {
+		if (dev->kdrv == RTE_KDRV_UNKNOWN &&
+		    (!dev->device.devargs ||
+		     dev->device.devargs->bus !=
+		     rte_bus_find_by_name("pci"))) {
+			PMD_INIT_LOG(INFO,
+				"skip kernel managed virtio device.");
+			return -1;
+		}
+		return -1;
+	}
+
+	virtio_hw_internal[hw->dev_id].vtpci_ops = &legacy_ops_crypto;
+	hw->modern = 0;
+
+	return 0;
+}
diff --git a/drivers/crypto/virtio/virtio_pci.h b/drivers/crypto/virtio/virtio_pci.h
new file mode 100644
index 0000000..888a4c2
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_pci.h
@@ -0,0 +1,286 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_PCI_H_
+#define _VIRTIO_PCI_H_
+
+#include <stdint.h>
+
+#ifdef __FreeBSD__
+#include <sys/types.h>
+#include <machine/cpufunc.h>
+#else
+#include <sys/io.h>
+#endif
+
+#include <rte_cryptodev.h>
+#include <rte_bus_pci.h>
+
+struct virtqueue;
+
+/* VirtIO PCI vendor/device ID. */
+#define VIRTIO_CRYPTO_PCI_VENDORID        0x1AF4
+#define VIRTIO_CRYPTO_PCI_LEGACY_DEVICEID 0x1014
+#define VIRTIO_CRYPTO_PCI_MODERN_DEVICEID 0x1054
+
+/* VirtIO ABI version, this must match exactly. */
+#define VIRTIO_PCI_ABI_VERSION 0
+
+/*
+ * VirtIO Header, located in BAR 0.
+ */
+#define VIRTIO_PCI_HOST_FEATURES  0  /* host's supported features (32, RO) */
+#define VIRTIO_PCI_GUEST_FEATURES 4  /* guest's supported features (32, RW) */
+#define VIRTIO_PCI_QUEUE_PFN	  8  /* physical address of VQ (32, RW) */
+#define VIRTIO_PCI_QUEUE_NUM	  12 /* number of ring entries (16, RO) */
+#define VIRTIO_PCI_QUEUE_SEL	  14 /* current VQ selection (16, RW) */
+#define VIRTIO_PCI_QUEUE_NOTIFY   16 /* notify host regarding VQ (16, RW) */
+#define VIRTIO_PCI_STATUS	  18 /* device status register (8, RW) */
+/* interrupt status register, reading also clears the register (8, RO) */
+#define VIRTIO_PCI_ISR		  19
+/* Only if MSIX is enabled: */
+#define VIRTIO_MSI_CONFIG_VECTOR  20 /* configuration change vector (16, RW) */
+/* vector for selected VQ notifications (16, RW) */
+#define VIRTIO_MSI_QUEUE_VECTOR	  22
+
+/* The bit of the ISR which indicates a device has an interrupt. */
+#define VIRTIO_PCI_ISR_INTR   0x1
+/* The bit of the ISR which indicates a device configuration change. */
+#define VIRTIO_PCI_ISR_CONFIG 0x2
+/* Vector value used to disable MSI for queue. */
+#define VIRTIO_MSI_NO_VECTOR 0xFFFF
+
+/* Status byte for guest to report progress. */
+#define VIRTIO_CONFIG_STATUS_RESET     0x00
+#define VIRTIO_CONFIG_STATUS_ACK       0x01
+#define VIRTIO_CONFIG_STATUS_DRIVER    0x02
+#define VIRTIO_CONFIG_STATUS_DRIVER_OK 0x04
+#define VIRTIO_CONFIG_STATUS_FEATURES_OK 0x08
+#define VIRTIO_CONFIG_STATUS_FAILED    0x80
+
+/*
+ * Each virtqueue indirect descriptor list must be physically contiguous.
+ * To allow us to malloc(9) each list individually, limit the number
+ * supported to what will fit in one page. With 4KB pages, this is a limit
+ * of 256 descriptors. If there is ever a need for more, we can switch to
+ * contigmalloc(9) for the larger allocations, similar to what
+ * bus_dmamem_alloc(9) does.
+ *
+ * Note the sizeof(struct vring_desc) is 16 bytes.
+ */
+#define VIRTIO_MAX_INDIRECT ((int) (PAGE_SIZE / 16))
+
+/* The feature bitmap for virtio crypto */
+
+/* virtio_crypto_config.status available */
+#define VIRTIO_CRYPTO_F_STATUS          0
+/* Device supports multiqueue */
+#define VIRTIO_CRYPTO_F_MQ		1
+
+/*
+ * Do we get callbacks when the ring is completely used, even if we've
+ * suppressed them?
+ */
+#define VIRTIO_F_NOTIFY_ON_EMPTY	24
+
+/* Can the device handle any descriptor layout? */
+#define VIRTIO_F_ANY_LAYOUT		27
+
+/* We support indirect buffer descriptors */
+#define VIRTIO_RING_F_INDIRECT_DESC	28
+
+#define VIRTIO_F_VERSION_1		32
+#define VIRTIO_F_IOMMU_PLATFORM	33
+
+/*
+ * The Guest publishes the used index for which it expects an interrupt
+ * at the end of the avail ring; the Host should ignore the avail->flags
+ * field. Likewise, the Host publishes the avail index for which it
+ * expects a kick at the end of the used ring; the Guest should ignore
+ * the used->flags field.
+ */
+#define VIRTIO_RING_F_EVENT_IDX		29
+
+/*
+ * Maximum number of virtqueue pairs per device.
+ */
+#define VIRTIO_MAX_VIRTQUEUES_PAIRS 8
+
+/* Common configuration */
+#define VIRTIO_PCI_CAP_COMMON_CFG	1
+/* Notifications */
+#define VIRTIO_PCI_CAP_NOTIFY_CFG	2
+/* ISR Status */
+#define VIRTIO_PCI_CAP_ISR_CFG		3
+/* Device specific configuration */
+#define VIRTIO_PCI_CAP_DEVICE_CFG	4
+/* PCI configuration access */
+#define VIRTIO_PCI_CAP_PCI_CFG		5
+
+/* This is the PCI capability header: */
+struct virtio_pci_cap {
+	uint8_t cap_vndr;		/* Generic PCI field: PCI_CAP_ID_VNDR */
+	uint8_t cap_next;		/* Generic PCI field: next ptr. */
+	uint8_t cap_len;		/* Generic PCI field: capability length */
+	uint8_t cfg_type;		/* Identifies the structure. */
+	uint8_t bar;			/* Where to find it. */
+	uint8_t padding[3];		/* Pad to full dword. */
+	uint32_t offset;		/* Offset within bar. */
+	uint32_t length;		/* Length of the structure, in bytes. */
+};
+
+struct virtio_pci_notify_cap {
+	struct virtio_pci_cap cap;
+	uint32_t notify_off_multiplier;	/* Multiplier for queue_notify_off. */
+};
+
+/* Fields in VIRTIO_PCI_CAP_COMMON_CFG: */
+struct virtio_pci_common_cfg {
+	/* About the whole device. */
+	uint32_t device_feature_select;	/* read-write */
+	uint32_t device_feature;	/* read-only */
+	uint32_t guest_feature_select;	/* read-write */
+	uint32_t guest_feature;		/* read-write */
+	uint16_t msix_config;		/* read-write */
+	uint16_t num_queues;		/* read-only */
+	uint8_t device_status;		/* read-write */
+	uint8_t config_generation;	/* read-only */
+
+	/* About a specific virtqueue. */
+	uint16_t queue_select;		/* read-write */
+	uint16_t queue_size;		/* read-write, power of 2. */
+	uint16_t queue_msix_vector;	/* read-write */
+	uint16_t queue_enable;		/* read-write */
+	uint16_t queue_notify_off;	/* read-only */
+	uint32_t queue_desc_lo;		/* read-write */
+	uint32_t queue_desc_hi;		/* read-write */
+	uint32_t queue_avail_lo;	/* read-write */
+	uint32_t queue_avail_hi;	/* read-write */
+	uint32_t queue_used_lo;		/* read-write */
+	uint32_t queue_used_hi;		/* read-write */
+};
+
+struct virtio_crypto_hw;
+
+struct virtio_pci_ops {
+	void (*read_dev_cfg)(struct virtio_crypto_hw *hw, size_t offset,
+			     void *dst, int len);
+	void (*write_dev_cfg)(struct virtio_crypto_hw *hw, size_t offset,
+			      const void *src, int len);
+	void (*reset)(struct virtio_crypto_hw *hw);
+
+	uint8_t (*get_status)(struct virtio_crypto_hw *hw);
+	void    (*set_status)(struct virtio_crypto_hw *hw, uint8_t status);
+
+	uint64_t (*get_features)(struct virtio_crypto_hw *hw);
+	void     (*set_features)(struct virtio_crypto_hw *hw, uint64_t features);
+
+	uint8_t (*get_isr)(struct virtio_crypto_hw *hw);
+
+	uint16_t (*set_config_irq)(struct virtio_crypto_hw *hw, uint16_t vec);
+
+	uint16_t (*set_queue_irq)(struct virtio_crypto_hw *hw, struct virtqueue *vq,
+			uint16_t vec);
+
+	uint16_t (*get_queue_num)(struct virtio_crypto_hw *hw, uint16_t queue_id);
+	int (*setup_queue)(struct virtio_crypto_hw *hw, struct virtqueue *vq);
+	void (*del_queue)(struct virtio_crypto_hw *hw, struct virtqueue *vq);
+	void (*notify_queue)(struct virtio_crypto_hw *hw, struct virtqueue *vq);
+};
+
+struct virtio_crypto_config;
+
+struct virtio_crypto_hw {
+	/* control queue */
+	struct virtqueue *cvq;
+	uint16_t    dev_id;
+	uint16_t	max_dataqueues;
+	uint64_t    req_guest_features;
+	uint64_t    guest_features;
+	uint8_t	    use_msix;
+	uint8_t     modern;
+	uint32_t    notify_off_multiplier;
+	uint8_t     *isr;
+	uint16_t    *notify_base;
+	struct virtio_pci_common_cfg *common_cfg;
+	struct virtio_crypto_config *dev_cfg;
+};
+
+/*
+ * While virtio_crypto_hw is stored in shared memory, this structure holds
+ * the per-process information that may differ between processes in the
+ * multi-process model, for example the vtpci_ops pointer.
+ */
+struct virtio_hw_internal {
+	const struct virtio_pci_ops *vtpci_ops;
+	struct rte_pci_ioport io;
+};
+
+#define VTPCI_OPS(hw)	(virtio_hw_internal[(hw)->dev_id].vtpci_ops)
+#define VTPCI_IO(hw)	(&virtio_hw_internal[(hw)->dev_id].io)
+
+extern struct virtio_hw_internal virtio_hw_internal[RTE_MAX_VIRTIO_CRYPTO];
+
+/*
+ * How many bits to shift physical queue address written to QUEUE_PFN.
+ * 12 is historical, and due to x86 page size.
+ */
+#define VIRTIO_PCI_QUEUE_ADDR_SHIFT 12
+
+/* The alignment to use between consumer and producer parts of vring. */
+#define VIRTIO_PCI_VRING_ALIGN 4096
+
+/*
+ * Function declaration from virtio_pci.c
+ */
+int vtpci_init_crypto(struct rte_pci_device *dev, struct virtio_crypto_hw *hw);
+
+void vtpci_reset_crypto(struct virtio_crypto_hw *);
+
+void vtpci_reinit_complete_crypto(struct virtio_crypto_hw *);
+
+uint8_t vtpci_get_status_crypto(struct virtio_crypto_hw *);
+void vtpci_set_status_crypto(struct virtio_crypto_hw *, uint8_t);
+
+uint64_t vtpci_negotiate_features_crypto(struct virtio_crypto_hw *, uint64_t);
+
+void vtpci_write_dev_config_crypto(struct virtio_crypto_hw *, size_t, const void *, int);
+
+void vtpci_read_dev_config_crypto(struct virtio_crypto_hw *, uint64_t, void *, int);
+
+uint8_t vtpci_isr_crypto(struct virtio_crypto_hw *);
+
+uint16_t vtpci_irq_config(struct virtio_crypto_hw *, uint16_t);
+
+#endif /* _VIRTIO_PCI_H_ */
diff --git a/drivers/crypto/virtio/virtio_ring.h b/drivers/crypto/virtio/virtio_ring.h
new file mode 100644
index 0000000..00209d4
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_ring.h
@@ -0,0 +1,169 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTIO_RING_H_
+#define _VIRTIO_RING_H_
+
+#include <stdint.h>
+
+#include <rte_common.h>
+
+/* This marks a buffer as continuing via the next field. */
+#define VRING_DESC_F_NEXT		1
+/* This marks a buffer as write-only (otherwise read-only). */
+#define VRING_DESC_F_WRITE		2
+/* This means the buffer contains a list of buffer descriptors. */
+#define VRING_DESC_F_INDIRECT	4
+
+/*
+ * The Host uses this in used->flags to advise the Guest: don't kick me
+ * when you add a buffer.  It's unreliable, so it's simply an
+ * optimization.  Guest will still kick if it's out of buffers.
+ */
+#define VRING_USED_F_NO_NOTIFY	1
+/*
+ * The Guest uses this in avail->flags to advise the Host: don't
+ * interrupt me when you consume a buffer. It's unreliable, so it's
+ * simply an optimization.
+ */
+#define VRING_AVAIL_F_NO_INTERRUPT	1
+
+/*
+ * VirtIO ring descriptors: 16 bytes.
+ * These can chain together via "next".
+ */
+struct vring_desc {
+	uint64_t addr;	/* Address (guest-physical). */
+	uint32_t len;	/* Length. */
+	uint16_t flags; /* The flags as indicated above. */
+	uint16_t next;	/* We chain unused descriptors via this. */
+};
+
+struct vring_avail {
+	uint16_t flags;
+	uint16_t idx;
+	uint16_t ring[0];
+};
+
+/* id is a 16-bit index; uint32_t is used here for padding reasons. */
+struct vring_used_elem {
+	/* Index of start of used descriptor chain. */
+	uint32_t id;
+	/* Total length of the descriptor chain which was written to. */
+	uint32_t len;
+};
+
+struct vring_used {
+	uint16_t flags;
+	uint16_t idx;
+	struct vring_used_elem ring[0];
+};
+
+struct vring {
+	unsigned int num;
+	struct vring_desc  *desc;
+	struct vring_avail *avail;
+	struct vring_used  *used;
+};
+
+/* The standard layout for the ring is a contiguous chunk of memory which
+ * looks like this.  We assume num is a power of 2.
+ *
+ * struct vring {
+ *		// The actual descriptors (16 bytes each)
+ *		struct vring_desc desc[num];
+ *
+ *		// A ring of available descriptor heads with free-running index.
+ *		__u16 avail_flags;
+ *		__u16 avail_idx;
+ *		__u16 available[num];
+ *		__u16 used_event_idx;
+ *
+ *		// Padding to the next align boundary.
+ *		char pad[];
+ *
+ *		// A ring of used descriptor heads with free-running index.
+ *		__u16 used_flags;
+ *		__u16 used_idx;
+ *		struct vring_used_elem used[num];
+ *		__u16 avail_event_idx;
+ * };
+ *
+ * NOTE: for VirtIO PCI, align is 4096.
+ */
+
+/*
+ * We publish the used event index at the end of the available ring, and vice
+ * versa. They are at the end for backwards compatibility.
+ */
+#define vring_used_event(vr)  ((vr)->avail->ring[(vr)->num])
+#define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
+
+static inline size_t
+vring_crypto_size(unsigned int num, unsigned long align)
+{
+	size_t size;
+
+	size = num * sizeof(struct vring_desc);
+	size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
+	size = RTE_ALIGN_CEIL(size, align);
+	size += sizeof(struct vring_used) +
+		(num * sizeof(struct vring_used_elem));
+	return size;
+}
+
+static inline void
+vring_crypto_init(struct vring *vr, unsigned int num, uint8_t *p,
+	unsigned long align)
+{
+	vr->num = num;
+	vr->desc = (struct vring_desc *) p;
+	vr->avail = (struct vring_avail *) (p +
+		num * sizeof(struct vring_desc));
+	vr->used = (void *)
+		RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
+}
+
+/*
+ * The following is used with VIRTIO_RING_F_EVENT_IDX.
+ * Assuming a given event_idx value from the other side, if we have
+ * just incremented index from old to new_idx, should we trigger an
+ * event? For example, with old = 10, new_idx = 12 and event_idx = 10,
+ * entry 10 has just been consumed, so an event is due.
+ */
+static inline int
+vring_crypto_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
+{
+	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
+}
+
+#endif /* _VIRTIO_RING_H_ */
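
To make the ring layout above concrete, a small sketch of how
vring_crypto_size() and vring_crypto_init() combine (illustrative only:
the memzone name "example_vring" and the 1024-entry size are assumptions):

    #include <rte_memzone.h>
    #include "virtio_pci.h"	/* VIRTIO_PCI_VRING_ALIGN */
    #include "virtio_ring.h"

    static struct vring example_vr;

    static int
    example_ring_setup(void)
    {
            unsigned int num = 1024;	/* must be a power of 2 */
            size_t sz = vring_crypto_size(num, VIRTIO_PCI_VRING_ALIGN);
            const struct rte_memzone *mz;

            mz = rte_memzone_reserve_aligned("example_vring", sz,
                            SOCKET_ID_ANY, 0, VIRTIO_PCI_VRING_ALIGN);
            if (mz == NULL)
                    return -1;

            /* desc[], avail and used all live in one contiguous chunk */
            vring_crypto_init(&example_vr, num, (uint8_t *)mz->addr,
                            VIRTIO_PCI_VRING_ALIGN);
            return 0;
    }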
diff --git a/drivers/crypto/virtio/virtio_rxtx.c b/drivers/crypto/virtio/virtio_rxtx.c
new file mode 100644
index 0000000..6085a8f
--- /dev/null
+++ b/drivers/crypto/virtio/virtio_rxtx.c
@@ -0,0 +1,569 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <rte_cycles.h>
+#include <rte_memory.h>
+#include <rte_memcpy.h>
+#include <rte_memzone.h>
+#include <rte_branch_prediction.h>
+#include <rte_mempool.h>
+#include <rte_malloc.h>
+#include <rte_mbuf.h>
+#include <rte_cryptodev.h>
+#include <rte_prefetch.h>
+#include <rte_string_fns.h>
+#include <rte_errno.h>
+#include <rte_byteorder.h>
+#include <rte_cryptodev_pmd.h>
+
+#include "virtio_logs.h"
+#include "virtio_cryptodev.h"
+#include "virtio_pci.h"
+#include "virtqueue.h"
+#include "virtio_crypto_algs.h"
+#include "virtio_crypto.h"
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_DUMP
+#define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
+#else
+#define VIRTIO_DUMP_PACKET(m, len) do { } while (0)
+#endif
+
+#define NUM_ENTRY_VIRTIO_CRYPTO_TX 7
+
+enum rte_crypto_op_status status_transfer[VIRTIO_CRYPTO_INVSESS + 1] = {
+	RTE_CRYPTO_OP_STATUS_SUCCESS,
+	RTE_CRYPTO_OP_STATUS_ERROR,
+	RTE_CRYPTO_OP_STATUS_AUTH_FAILED,
+	RTE_CRYPTO_OP_STATUS_INVALID_ARGS,
+	RTE_CRYPTO_OP_STATUS_INVALID_SESSION,
+};
+
+static void
+vq_crypto_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
+{
+	struct vring_desc *dp, *dp_tail;
+	struct vq_desc_extra *dxp;
+	uint16_t desc_idx_last = desc_idx;
+
+	dp = &vq->vq_ring.desc[desc_idx];
+	dxp = &vq->vq_descx[desc_idx];
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt + dxp->ndescs);
+	if ((dp->flags & VRING_DESC_F_INDIRECT) == 0) {
+		while (dp->flags & VRING_DESC_F_NEXT) {
+			desc_idx_last = dp->next;
+			dp = &vq->vq_ring.desc[dp->next];
+		}
+	}
+	dxp->ndescs = 0;
+
+	/*
+	 * We must append the existing free chain, if any, to the end of
+	 * newly freed chain. If the virtqueue was completely used, then
+	 * head would be VQ_RING_DESC_CHAIN_END (ASSERTed above).
+	 */
+	if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END) {
+		vq->vq_desc_head_idx = desc_idx;
+	} else {
+		dp_tail = &vq->vq_ring.desc[vq->vq_desc_tail_idx];
+		dp_tail->next = desc_idx;
+	}
+
+	vq->vq_desc_tail_idx = desc_idx_last;
+	dp->next = VQ_RING_DESC_CHAIN_END;
+}
+
+static uint16_t
+virtqueue_crypto_dequeue_burst_rx(struct virtqueue *vq,
+		struct rte_crypto_op **rx_pkts, uint16_t num)
+{
+	struct vring_used_elem *uep;
+	struct rte_crypto_op *cop;
+	uint16_t used_idx, desc_idx;
+	uint16_t i;
+	struct virtio_crypto_inhdr *inhdr;
+
+	/* Caller does the check */
+	for (i = 0; i < num; i++) {
+		used_idx = (uint16_t)(vq->vq_used_cons_idx
+				& (vq->vq_nentries - 1));
+		uep = &vq->vq_ring.used->ring[used_idx];
+		desc_idx = (uint16_t)uep->id;
+		cop = (struct rte_crypto_op *)
+				vq->vq_descx[desc_idx].crypto_op;
+
+		/*
+		 * status is saved just after virtio_crypto_op_data_req,
+		 * see virtqueue_crypto_sym_enqueue_xmit() for more details
+		 */
+		inhdr = (struct virtio_crypto_inhdr *)(
+			(uint8_t *)vq->vq_descx[desc_idx].malloc_addr +
+			sizeof(struct virtio_crypto_op_data_req));
+		if (inhdr->status == VIRTIO_CRYPTO_OK) {
+			cop->status = status_transfer[inhdr->status];
+		} else if (inhdr->status <= VIRTIO_CRYPTO_INVSESS) {
+			cop->status = status_transfer[inhdr->status];
+			vq->packets_received_failed++;
+		} else {
+			cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			vq->packets_received_failed++;
+		}
+
+		vq->packets_received_total++;
+
+		rx_pkts[i] = cop;
+		vq->vq_used_cons_idx++;
+		vq_crypto_ring_free_chain(vq, desc_idx);
+		vq->vq_descx[desc_idx].crypto_op = NULL;
+		rte_free(vq->vq_descx[desc_idx].malloc_addr);
+		vq->vq_descx[desc_idx].malloc_addr = NULL;
+	}
+
+	return i;
+}
+
+static int
+virtqueue_crypto_sym_pkt_header_arrange(
+		struct rte_crypto_op *cop,
+		struct virtio_crypto_op_data_req *data,
+		struct virtio_crypto_session *session)
+{
+	struct rte_crypto_sym_op *sym_op = cop->sym;
+	struct virtio_crypto_op_data_req *req_data = data;
+	struct virtio_crypto_op_ctrl_req *ctrl = &session->ctrl;
+	struct virtio_crypto_cipher_session_para *para;
+
+	req_data->header.session_id = session->session_id;
+
+	switch (ctrl->u.sym_create_session.op_type) {
+	case VIRTIO_CRYPTO_SYM_OP_CIPHER:
+		req_data->u.sym_req.op_type = VIRTIO_CRYPTO_SYM_OP_CIPHER;
+
+		para = &ctrl->u.sym_create_session.u.cipher.para;
+		if (para->op == VIRTIO_CRYPTO_OP_ENCRYPT)
+			req_data->header.opcode = VIRTIO_CRYPTO_CIPHER_ENCRYPT;
+		else
+			req_data->header.opcode = VIRTIO_CRYPTO_CIPHER_DECRYPT;
+
+		req_data->u.sym_req.u.cipher.para.iv_len
+			= session->iv.length;
+
+		req_data->u.sym_req.u.cipher.para.src_data_len =
+			(sym_op->cipher.data.length + sym_op->cipher.data.offset);
+		req_data->u.sym_req.u.cipher.para.dst_data_len =
+			req_data->u.sym_req.u.cipher.para.src_data_len;
+		break;
+	case VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING:
+		req_data->u.sym_req.op_type = VIRTIO_CRYPTO_SYM_OP_ALGORITHM_CHAINING;
+
+		para = &ctrl->u.sym_create_session.u.chain.para.cipher_param;
+		if (para->op == VIRTIO_CRYPTO_OP_ENCRYPT)
+			req_data->header.opcode = VIRTIO_CRYPTO_CIPHER_ENCRYPT;
+		else
+			req_data->header.opcode = VIRTIO_CRYPTO_CIPHER_DECRYPT;
+
+		req_data->u.sym_req.u.chain.para.iv_len = session->iv.length;
+
+		req_data->u.sym_req.u.chain.para.src_data_len =
+			(sym_op->cipher.data.length + sym_op->cipher.data.offset);
+		req_data->u.sym_req.u.chain.para.dst_data_len =
+			req_data->u.sym_req.u.chain.para.src_data_len;
+		req_data->u.sym_req.u.chain.para.cipher_start_src_offset =
+			sym_op->cipher.data.offset;
+		req_data->u.sym_req.u.chain.para.len_to_cipher =
+			sym_op->cipher.data.length;
+		req_data->u.sym_req.u.chain.para.hash_start_src_offset =
+			sym_op->auth.data.offset;
+		req_data->u.sym_req.u.chain.para.len_to_hash =
+			sym_op->auth.data.length;
+		req_data->u.sym_req.u.chain.para.aad_len =
+			ctrl->u.sym_create_session.u.chain.para.aad_len;
+
+		if (ctrl->u.sym_create_session.u.chain.para.hash_mode ==
+			VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
+			req_data->u.sym_req.u.chain.para.hash_result_len =
+				ctrl->u.sym_create_session.u.chain.para.u.
+				hash_param.hash_result_len;
+		else if (ctrl->u.sym_create_session.u.chain.para.hash_mode ==
+			VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
+			req_data->u.sym_req.u.chain.para.hash_result_len =
+				ctrl->u.sym_create_session.u.chain.para.u.
+				mac_param.hash_result_len;
+		break;
+	default:
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+virtqueue_crypto_sym_enqueue_xmit(
+		struct virtqueue *txvq,
+		struct rte_crypto_op *cop)
+{
+	uint16_t idx = 0;
+	uint16_t num_entry;
+	uint16_t needed = 1;
+	uint16_t head_idx;
+	struct vq_desc_extra *dxp;
+	struct vring_desc *start_dp;
+	struct vring_desc *desc;
+	uint8_t *malloc_addr;
+	uint64_t malloc_phys_addr;
+	uint16_t req_data_len = sizeof(struct virtio_crypto_op_data_req);
+	uint32_t indirect_vring_addr_offset = req_data_len +
+		sizeof(struct virtio_crypto_inhdr);
+	struct rte_crypto_sym_op *sym_op = cop->sym;
+	struct virtio_crypto_session *session =
+		(struct virtio_crypto_session *)get_session_private_data(
+		cop->sym->session, cryptodev_virtio_driver_id);
+	struct virtio_crypto_op_data_req *op_data_req;
+	uint32_t hash_result_len = 0;
+
+	if (unlikely(sym_op->m_src->nb_segs != 1))
+		return -EMSGSIZE;
+	if (unlikely(txvq->vq_free_cnt == 0))
+		return -ENOSPC;
+	if (unlikely(txvq->vq_free_cnt < needed))
+		return -EMSGSIZE;
+	head_idx = txvq->vq_desc_head_idx;
+	if (unlikely(head_idx >= txvq->vq_nentries))
+		return -EFAULT;
+	if (unlikely(session == NULL))
+		return -EFAULT;
+
+	/*
+	 * malloc memory to store virtio_crypto_op_data_req, status returned
+	 * and indirect vring entries
+	 */
+	malloc_addr = rte_malloc(NULL, req_data_len +
+		sizeof(struct virtio_crypto_inhdr) +
+		NUM_ENTRY_VIRTIO_CRYPTO_TX * sizeof(struct vring_desc),
+		RTE_CACHE_LINE_SIZE);
+	if (malloc_addr == NULL) {
+		PMD_TX_LOG(ERR, "not enough heap room");
+		return -ENOSPC;
+	}
+	op_data_req = (struct virtio_crypto_op_data_req *)malloc_addr;
+	malloc_phys_addr = rte_malloc_virt2phy(malloc_addr);
+
+	if (virtqueue_crypto_sym_pkt_header_arrange(cop, op_data_req, session)) {
+		rte_free(malloc_addr);
+		return -EFAULT;
+	}
+
+	/* status is initialized to VIRTIO_CRYPTO_ERR */
+	((struct virtio_crypto_inhdr *)
+		((uint8_t *)malloc_addr + req_data_len))->status =
+		VIRTIO_CRYPTO_ERR;
+
+	/* point to indirect vring entry */
+	desc = (struct vring_desc *)
+		((uint8_t *)malloc_addr + indirect_vring_addr_offset);
+	for (idx = 0; idx < (NUM_ENTRY_VIRTIO_CRYPTO_TX - 1); idx++)
+		desc[idx].next = idx + 1;
+	desc[NUM_ENTRY_VIRTIO_CRYPTO_TX - 1].next = VQ_RING_DESC_CHAIN_END;
+
+	idx = 0;
+
+	/* indirect vring: first part, virtio_crypto_op_data_req */
+	desc[idx].addr = malloc_phys_addr;
+	desc[idx].len = req_data_len;
+	desc[idx++].flags = VRING_DESC_F_NEXT;
+
+	/* indirect vring: iv of cipher */
+	if (session->iv.length) {
+		desc[idx].addr = cop->phys_addr + session->iv.offset;
+		desc[idx].len = session->iv.length;
+		desc[idx++].flags = VRING_DESC_F_NEXT;
+	}
+
+	/* indirect vring: additional auth data */
+	if (session->aad.length) {
+		desc[idx].addr = session->aad.phys_addr;
+		desc[idx].len = session->aad.length;
+		desc[idx++].flags = VRING_DESC_F_NEXT;
+	}
+
+	/* indirect vring: src data */
+	desc[idx].addr = rte_pktmbuf_mtophys_offset(sym_op->m_src, 0);
+	desc[idx].len = (sym_op->cipher.data.offset
+		+ sym_op->cipher.data.length);
+	desc[idx++].flags = VRING_DESC_F_NEXT;
+
+	/* indirect vring: dst data (in-place when no separate dst mbuf) */
+	if (sym_op->m_dst)
+		desc[idx].addr = rte_pktmbuf_mtophys_offset(sym_op->m_dst, 0);
+	else
+		desc[idx].addr = rte_pktmbuf_mtophys_offset(sym_op->m_src, 0);
+	desc[idx].len = (sym_op->cipher.data.offset
+		+ sym_op->cipher.data.length);
+	desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+
+	/* indirect vring: digest result */
+	if (session->ctrl.u.sym_create_session.u.chain.para.hash_mode ==
+		VIRTIO_CRYPTO_SYM_HASH_MODE_PLAIN)
+		hash_result_len =
+			session->ctrl.u.sym_create_session.u.chain.para.u.
+			hash_param.hash_result_len;
+	else if (session->ctrl.u.sym_create_session.u.chain.para.hash_mode ==
+		VIRTIO_CRYPTO_SYM_HASH_MODE_AUTH)
+		hash_result_len =
+			session->ctrl.u.sym_create_session.u.chain.para.u.
+			mac_param.hash_result_len;
+	if (hash_result_len > 0) {
+		desc[idx].addr = sym_op->auth.digest.phys_addr;
+		desc[idx].len = hash_result_len;
+		desc[idx++].flags = VRING_DESC_F_WRITE | VRING_DESC_F_NEXT;
+	}
+
+	/* indirect vring: last part, status returned */
+	desc[idx].addr = malloc_phys_addr + req_data_len;
+	desc[idx].len = sizeof(struct virtio_crypto_inhdr);
+	desc[idx++].flags = VRING_DESC_F_WRITE;
+
+	num_entry = idx;
+
+	/* save the infos to use when receiving packets */
+	dxp = &txvq->vq_descx[head_idx];
+	dxp->crypto_op = (void *)cop;
+	dxp->malloc_addr = malloc_addr;
+	dxp->ndescs = needed;
+
+	/* use a single ring descriptor that points to the indirect table */
+	start_dp = txvq->vq_ring.desc;
+	start_dp[head_idx].addr = malloc_phys_addr + indirect_vring_addr_offset;
+	start_dp[head_idx].len = num_entry * sizeof(struct vring_desc);
+	start_dp[head_idx].flags = VRING_DESC_F_INDIRECT;
+
+	idx = start_dp[head_idx].next;
+	txvq->vq_desc_head_idx = idx;
+	if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+		txvq->vq_desc_tail_idx = idx;
+	txvq->vq_free_cnt = (uint16_t)(txvq->vq_free_cnt - needed);
+	vq_crypto_update_avail_ring(txvq, head_idx);
+
+	return 0;
+}
+
+static int
+virtqueue_crypto_enqueue_xmit(struct virtqueue *txvq,
+		struct rte_crypto_op *cop)
+{
+	int ret;
+
+	switch (cop->type) {
+	case RTE_CRYPTO_OP_TYPE_SYMMETRIC:
+		ret = virtqueue_crypto_sym_enqueue_xmit(txvq, cop);
+		break;
+	default:
+		PMD_TX_LOG(ERR, "invalid crypto op type %u", cop->type);
+		ret = -EFAULT;
+		break;
+	}
+
+	return ret;
+}
+
+static int
+virtio_crypto_vring_start(struct virtqueue *vq)
+{
+	struct virtio_crypto_hw *hw = vq->hw;
+	int i, size = vq->vq_nentries;
+	struct vring *vr = &vq->vq_ring;
+	uint8_t *ring_mem = vq->vq_ring_virt_mem;
+
+	PMD_INIT_FUNC_TRACE();
+
+	vring_crypto_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+	vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
+	vq->vq_free_cnt = vq->vq_nentries;
+
+	/* Chain all the descriptors in the ring with an END */
+	for (i = 0; i < size - 1; i++)
+		vr->desc[i].next = (uint16_t)(i + 1);
+	vr->desc[i].next = VQ_RING_DESC_CHAIN_END;
+
+	/*
+	 * Stop the device (host) from interrupting the guest
+	 */
+	virtqueue_crypto_disable_intr(vq);
+
+	/*
+	 * Share the guest physical address of the virtqueue with the
+	 * backend; on legacy devices this is written to the
+	 * VIRTIO_PCI_QUEUE_PFN config register
+	 */
+	if (VTPCI_OPS(hw)->setup_queue(hw, vq) < 0) {
+		PMD_INIT_LOG(ERR, "setup_queue failed");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+void
+virtio_crypto_cq_start(struct rte_cryptodev *dev)
+{
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	if (hw->cvq) {
+		virtio_crypto_vring_start(hw->cvq);
+		VIRTQUEUE_DUMP((struct virtqueue *)hw->cvq);
+	}
+}
+
+void
+virtio_crypto_dq_start(struct rte_cryptodev *dev)
+{
+	/*
+	 * Start data vrings
+	 * -	Setup vring structure for data queues
+	 */
+	uint32_t i;
+	struct virtio_crypto_hw *hw = dev->data->dev_private;
+
+	PMD_INIT_FUNC_TRACE();
+
+	/* Start data vring. */
+	for (i = 0; i < hw->max_dataqueues; i++) {
+		virtio_crypto_vring_start(dev->data->queue_pairs[i]);
+		VIRTQUEUE_DUMP((struct virtqueue *)dev->data->queue_pairs[i]);
+	}
+}
+
+/* vring size of data queue is 1024 */
+#define VIRTIO_MBUF_BURST_SZ 1024
+
+uint16_t
+virtio_crypto_pkt_rx_burst(void *tx_queue, struct rte_crypto_op **rx_pkts,
+		uint16_t nb_pkts)
+{
+	struct virtqueue *txvq = tx_queue;
+	uint16_t nb_used, num, nb_rx;
+
+	nb_used = VIRTQUEUE_NUSED(txvq);
+
+	virtio_rmb();
+
+	num = (uint16_t)(likely(nb_used <= nb_pkts) ? nb_used : nb_pkts);
+	num = (uint16_t)(likely(num <= VIRTIO_MBUF_BURST_SZ)
+		? num : VIRTIO_MBUF_BURST_SZ);
+
+	if (num == 0)
+		return 0;
+
+	nb_rx = virtqueue_crypto_dequeue_burst_rx(txvq, rx_pkts, num);
+	PMD_RX_LOG(DEBUG, "used:%d dequeue:%d", nb_used, num);
+
+	return nb_rx;
+}
+
+uint16_t
+virtio_crypto_pkt_tx_burst(void *tx_queue, struct rte_crypto_op **tx_pkts,
+		uint16_t nb_pkts)
+{
+	struct virtqueue *txvq;
+	uint16_t nb_tx;
+	int error;
+
+	if (unlikely(nb_pkts < 1))
+		return nb_pkts;
+	if (unlikely(tx_queue == NULL)) {
+		PMD_TX_LOG(ERR, "tx_queue is NULL");
+		return 0;
+	}
+	txvq = tx_queue;
+
+	PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+		struct rte_mbuf *txm = tx_pkts[nb_tx]->sym->m_src;
+		/* nb_segs is always 1 in the virtio crypto case */
+		int need = txm->nb_segs - txvq->vq_free_cnt;
+
+		/*
+		 * A positive value means there is not enough space in
+		 * the vring descriptors
+		 */
+		if (unlikely(need > 0)) {
+			/*
+			 * re-check, since the receive path may have freed
+			 * some space in the meantime
+			 */
+			need = txm->nb_segs - txvq->vq_free_cnt;
+			if (unlikely(need > 0)) {
+				PMD_TX_LOG(ERR, "No free tx descriptors "
+						"to transmit");
+				break;
+			}
+		}
+
+		/* Enqueue Packet buffers */
+		error = virtqueue_crypto_enqueue_xmit(txvq, tx_pkts[nb_tx]);
+		if (unlikely(error)) {
+			if (error == -ENOSPC)
+				PMD_TX_LOG(ERR,
+					"virtqueue_enqueue Free count = 0");
+			else if (error == -EMSGSIZE)
+				PMD_TX_LOG(ERR,
+					"virtqueue_enqueue Free count < 1");
+			else
+				PMD_TX_LOG(ERR,
+					"virtqueue_enqueue error: %d", error);
+			txvq->packets_sent_failed++;
+			break;
+		}
+
+		txvq->packets_sent_total++;
+	}
+
+	if (likely(nb_tx)) {
+		vq_crypto_update_avail_idx(txvq);
+
+		if (unlikely(virtqueue_crypto_kick_prepare(txvq))) {
+			virtqueue_crypto_notify(txvq);
+			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+		}
+	}
+
+	return nb_tx;
+}
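
The burst functions above sit behind the generic cryptodev enqueue/dequeue
API; from an application's point of view a minimal polling loop could look
like this (illustrative only: dev_id, qp_id and the ops[] source are
assumptions):

    #define EXAMPLE_BURST 32

    static void
    example_poll(uint8_t dev_id, uint16_t qp_id,
                    struct rte_crypto_op **ops, uint16_t n)
    {
            struct rte_crypto_op *deq[EXAMPLE_BURST];
            uint16_t sent, done = 0;

            sent = rte_cryptodev_enqueue_burst(dev_id, qp_id, ops, n);
            while (done < sent)
                    /* the caller should check each deq[i]->status */
                    done += rte_cryptodev_dequeue_burst(dev_id, qp_id,
                                    deq, EXAMPLE_BURST);
    }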
diff --git a/drivers/crypto/virtio/virtqueue.c b/drivers/crypto/virtio/virtqueue.c
new file mode 100644
index 0000000..be3a5a6
--- /dev/null
+++ b/drivers/crypto/virtio/virtqueue.c
@@ -0,0 +1,80 @@ 
+/*-
+ *     BSD LICENSE
+ *
+ *     Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *     All rights reserved.
+ *
+ *     Redistribution and use in source and binary forms, with or without
+ *     modification, are permitted provided that the following conditions
+ *     are met:
+ *
+ *       * Redistributions of source code must retain the above copyright
+ *    	 notice, this list of conditions and the following disclaimer.
+ *       * Redistributions in binary form must reproduce the above copyright
+ *    	 notice, this list of conditions and the following disclaimer in
+ *    	 the documentation and/or other materials provided with the
+ *    	 distribution.
+ *       * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *    	 from this software without specific prior written permission.
+ *
+ *     THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *     "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *     LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *     A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *     OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *     SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *     LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *     DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *     THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *     (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *     OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+#include <stdint.h>
+
+#include <rte_mbuf.h>
+#include <rte_crypto.h>
+#include <rte_malloc.h>
+
+#include "virtqueue.h"
+#include "virtio_logs.h"
+#include "virtio_pci.h"
+
+void
+virtqueue_crypto_disable_intr(struct virtqueue *vq)
+{
+	/*
+	 * Set VRING_AVAIL_F_NO_INTERRUPT to hint host
+	 * not to interrupt when it consumes packets
+	 * Note: this is only considered a hint to the host
+	 */
+	vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+}
+
+void
+virtqueue_crypto_detatch_unused(struct virtqueue *vq)
+{
+	struct rte_crypto_op *cop = NULL;
+	uint8_t *malloc_addr = NULL;
+
+	int idx;
+
+	if (vq == NULL)
+		return;
+
+	for (idx = 0; idx < vq->vq_nentries; idx++) {
+		cop = vq->vq_descx[idx].crypto_op;
+		if (cop) {
+			if (cop->sym->m_src)
+				rte_pktmbuf_free(cop->sym->m_src);
+			if (cop->sym->m_dst)
+				rte_pktmbuf_free(cop->sym->m_dst);
+			rte_crypto_op_free(cop);
+			vq->vq_descx[idx].crypto_op = NULL;
+		}
+
+		malloc_addr = vq->vq_descx[idx].malloc_addr;
+		if (malloc_addr) {
+			rte_free(malloc_addr);
+			vq->vq_descx[idx].malloc_addr = NULL;
+		}
+	}
+}
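
A sketch of how a queue teardown path might use the helper above
(illustrative only: example_queue_release and the memzone handling are
assumptions, not the PMD's actual release code):

    static void
    example_queue_release(struct virtqueue *vq)
    {
            /* drop any crypto ops and buffers still pending on the ring */
            virtqueue_crypto_detatch_unused(vq);

            rte_memzone_free(vq->mz);
            rte_free(vq);
    }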
diff --git a/drivers/crypto/virtio/virtqueue.h b/drivers/crypto/virtio/virtqueue.h
new file mode 100644
index 0000000..d666c3a
--- /dev/null
+++ b/drivers/crypto/virtio/virtqueue.h
@@ -0,0 +1,208 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 HUAWEI TECHNOLOGIES CO., LTD. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of HUAWEI TECHNOLOGIES CO., LTD nor the names of
+ *       its contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _VIRTQUEUE_H_
+#define _VIRTQUEUE_H_
+
+#include <stdint.h>
+
+#include <rte_atomic.h>
+#include <rte_memory.h>
+#include <rte_memzone.h>
+#include <rte_mempool.h>
+
+#include "virtio_pci.h"
+#include "virtio_ring.h"
+#include "virtio_logs.h"
+#include "virtio_crypto.h"
+
+struct rte_mbuf;
+
+/*
+ * Per virtio_config.h in Linux.
+ *     For virtio_pci on SMP, we don't need to order with respect to MMIO
+ *     accesses through relaxed memory I/O windows, so smp_mb() et al are
+ *     sufficient.
+ */
+#define virtio_mb()	rte_smp_mb()
+#define virtio_rmb()	rte_smp_rmb()
+#define virtio_wmb()	rte_smp_wmb()
+
+#define VIRTQUEUE_MAX_NAME_SZ 32
+
+enum { VTCRYPTO_DATAQ = 0, VTCRYPTO_CTRLQ = 1 };
+
+/**
+ * The maximum virtqueue size is 2^15. Use that value as the end of
+ * descriptor chain terminator since it will never be a valid index
+ * in the descriptor table. This is used to verify we are correctly
+ * handling vq_free_cnt.
+ */
+#define VQ_RING_DESC_CHAIN_END 32768
+
+struct virtqueue {
+	/**< virtio_crypto_hw structure pointer. */
+	struct virtio_crypto_hw *hw;
+	/**< memzone backing the vring. */
+	const struct rte_memzone *mz;
+	/**< memzone to populate hdr. */
+	struct rte_mempool *mpool;
+	uint8_t     dev_id;              /**< Device identifier. */
+	uint16_t    vq_queue_index;       /**< PCI queue index */
+
+	void        *vq_ring_virt_mem;    /**< linear address of vring*/
+	unsigned int vq_ring_size;
+	phys_addr_t vq_ring_mem;          /**< physical address of vring */
+
+	struct vring vq_ring;    /**< vring keeping desc, used and avail */
+	uint16_t    vq_free_cnt; /**< num of desc available */
+	uint16_t    vq_nentries; /**< vring desc numbers */
+	uint16_t    vq_free_thresh; /**< free threshold */
+	/**
+	 * Head of the free chain in the descriptor table. If
+	 * there are no free descriptors, this will be set to
+	 * VQ_RING_DESC_CHAIN_END.
+	 */
+	uint16_t  vq_desc_head_idx;
+	uint16_t  vq_desc_tail_idx;
+	/**
+	 * Last consumed descriptor in the used table,
+	 * trails vq_ring.used->idx.
+	 */
+	uint16_t vq_used_cons_idx;
+	uint16_t vq_avail_idx;
+
+	/* Statistics */
+	uint64_t	packets_sent_total;
+	uint64_t	packets_sent_failed;
+	uint64_t	packets_received_total;
+	uint64_t	packets_received_failed;
+
+	uint16_t  *notify_addr;
+
+	struct vq_desc_extra {
+		void     *crypto_op;
+		uint8_t  *malloc_addr;
+		uint16_t ndescs;
+	} vq_descx[0];
+};
+
+/* If multiqueue is provided by host, then we support it. */
+#define VIRTIO_CRYPTO_CTRL_MQ                     4
+#define VIRTIO_CRYPTO_CTRL_MQ_VQ_PAIRS_SET        0
+#define VIRTIO_CRYPTO_CTRL_MQ_VQ_PAIRS_MIN        1
+#define VIRTIO_CRYPTO_CTRL_MQ_VQ_PAIRS_MAX        0x8000
+
+/**
+ * Tell the backend not to interrupt us.
+ */
+void virtqueue_crypto_disable_intr(struct virtqueue *vq);
+/**
+ *  Dump virtqueue internal structures, for debug purpose only.
+ */
+void virtqueue_crypto_dump(struct virtqueue *vq);
+/**
+ * Release any crypto ops and buffers still attached to the virtqueue.
+ */
+void virtqueue_crypto_detatch_unused(struct virtqueue *vq);
+
+static inline int
+virtqueue_crypto_full(const struct virtqueue *vq)
+{
+	return vq->vq_free_cnt == 0;
+}
+
+#define VIRTQUEUE_NUSED(vq) \
+	((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
+
+static inline void
+vq_crypto_update_avail_idx(struct virtqueue *vq)
+{
+	virtio_wmb();
+	vq->vq_ring.avail->idx = vq->vq_avail_idx;
+}
+
+static inline void
+vq_crypto_update_avail_ring(struct virtqueue *vq, uint16_t desc_idx)
+{
+	uint16_t avail_idx;
+	/*
+	 * Place the head of the descriptor chain into the next slot and make
+	 * it usable to the host. The chain is made available now rather than
+	 * deferring to virtqueue_notify() in the hopes that if the host is
+	 * currently running on another CPU, we can keep it processing the new
+	 * descriptor.
+	 */
+	avail_idx = (uint16_t)(vq->vq_avail_idx & (vq->vq_nentries - 1));
+	vq->vq_ring.avail->ring[avail_idx] = desc_idx;
+	vq->vq_avail_idx++;
+}
+
+static inline int
+virtqueue_crypto_kick_prepare(struct virtqueue *vq)
+{
+	return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
+}
+
+static inline void
+virtqueue_crypto_notify(struct virtqueue *vq)
+{
+	struct virtio_crypto_hw *hw = vq->hw;
+	/*
+	 * Ensure updated avail->idx is visible to host.
+	 * For virtio on IA, the notification is through an io port
+	 * operation, which is itself a serializing instruction.
+	 */
+	VTPCI_OPS(hw)->notify_queue(hw, vq);
+}
+
+#ifdef RTE_LIBRTE_PMD_VIRTIO_CRYPTO_DEBUG_DUMP
+#define VIRTQUEUE_DUMP(vq) do { \
+	uint16_t used_idx, nused; \
+	used_idx = (vq)->vq_ring.used->idx; \
+	nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
+	PMD_INIT_LOG(DEBUG, \
+	  "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
+	  " avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \
+	  " avail.flags=0x%x; used.flags=0x%x", \
+	  (vq)->vq_nentries, (vq)->vq_free_cnt, nused, \
+	  (vq)->vq_desc_head_idx, (vq)->vq_ring.avail->idx, \
+	  (vq)->vq_used_cons_idx, (vq)->vq_ring.used->idx, \
+	  (vq)->vq_ring.avail->flags, (vq)->vq_ring.used->flags); \
+} while (0)
+#else
+#define VIRTQUEUE_DUMP(vq) do { } while (0)
+#endif
+
+#endif /* _VIRTQUEUE_H_ */
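
The inline helpers above implement the avail-ring half of the protocol;
the order in which the TX path uses them is summed up by this sketch
(illustrative only: head is assumed to be the head index of a descriptor
chain already written into vq_ring.desc):

    static void
    example_submit(struct virtqueue *vq, uint16_t head)
    {
            /* expose the chain in the avail ring ... */
            vq_crypto_update_avail_ring(vq, head);
            /* ... publish the new avail index (includes a write barrier) ... */
            vq_crypto_update_avail_idx(vq);
            /* ... and kick only if the host asked for notifications */
            if (virtqueue_crypto_kick_prepare(vq))
                    virtqueue_crypto_notify(vq);
    }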
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 047121d..56d7c51 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -169,6 +169,7 @@  _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_AESNI_GCM)   += -L$(AESNI_MULTI_BUFFER_LIB_PATH)
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_OPENSSL)     += -lrte_pmd_openssl -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL_CRYPTO) += -lrte_pmd_null_crypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_QAT)         += -lrte_pmd_qat -lcrypto
+_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += -lrte_pmd_virtio_crypto -lcrypto
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)      += -lrte_pmd_snow3g
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_SNOW3G)      += -L$(LIBSSO_SNOW3G_PATH)/build -lsso_snow3g
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_KASUMI)      += -lrte_pmd_kasumi
diff --git a/test/test/test_cryptodev.c b/test/test/test_cryptodev.c
index 72988c5..2d03d32 100644
--- a/test/test/test_cryptodev.c
+++ b/test/test/test_cryptodev.c
@@ -1790,6 +1790,25 @@  struct crypto_unittest_params {
 }
 
 static int
+test_AES_cipheronly_virtio_all(void)
+{
+	struct crypto_testsuite_params *ts_params = &testsuite_params;
+	int status;
+
+	status = test_blockcipher_all_tests(ts_params->mbuf_pool,
+		ts_params->op_mpool,
+		ts_params->session_mpool,
+		ts_params->valid_devs[0],
+		rte_cryptodev_driver_id_get(
+		RTE_STR(CRYPTODEV_NAME_VIRTIO_SYM_PMD)),
+		BLKCIPHER_AES_CIPHERONLY_TYPE);
+
+	TEST_ASSERT_EQUAL(status, 0, "Test failed");
+
+	return TEST_SUCCESS;
+}
+
+static int
 test_AES_chain_dpaa_sec_all(void)
 {
 	struct crypto_testsuite_params *ts_params = &testsuite_params;
@@ -8732,6 +8751,18 @@  struct test_crypto_vector {
 	}
 };
 
+static struct unit_test_suite cryptodev_virtio_testsuite  = {
+	.suite_name = "Crypto VIRTIO Unit Test Suite",
+	.setup = testsuite_setup,
+	.teardown = testsuite_teardown,
+	.unit_test_cases = {
+		TEST_CASE_ST(ut_setup, ut_teardown,
+				test_AES_cipheronly_virtio_all),
+
+		TEST_CASES_END() /**< NULL terminate unit test array */
+	}
+};
+
 static struct unit_test_suite cryptodev_aesni_mb_testsuite  = {
 	.suite_name = "Crypto Device AESNI MB Unit Test Suite",
 	.setup = testsuite_setup,
@@ -9597,6 +9628,23 @@  struct test_crypto_vector {
 }
 
 static int
+test_cryptodev_virtio(void /*argv __rte_unused, int argc __rte_unused*/)
+{
+	gbl_driver_id =	rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_VIRTIO_SYM_PMD));
+
+	if (gbl_driver_id == -1) {
+		RTE_LOG(ERR, USER1, "VIRTIO PMD must be loaded. Check if "
+				"CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO is enabled "
+				"in config file to run this testsuite.\n");
+		return TEST_FAILED;
+	}
+
+	return unit_test_suite_runner(&cryptodev_virtio_testsuite);
+}
+
+static int
 test_cryptodev_aesni_mb(void /*argv __rte_unused, int argc __rte_unused*/)
 {
 	gbl_driver_id =	rte_cryptodev_driver_id_get(
@@ -9812,3 +9860,4 @@  struct test_crypto_vector {
 REGISTER_TEST_COMMAND(cryptodev_sw_mrvl_autotest, test_cryptodev_mrvl);
 REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
 REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
+REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
diff --git a/test/test/test_cryptodev.h b/test/test/test_cryptodev.h
index 2e9eb0b..3ca9918 100644
--- a/test/test/test_cryptodev.h
+++ b/test/test/test_cryptodev.h
@@ -89,6 +89,7 @@ 
 #define CRYPTODEV_NAME_DPAA2_SEC_PMD	crypto_dpaa2_sec
 #define CRYPTODEV_NAME_SCHEDULER_PMD	crypto_scheduler
 #define CRYPTODEV_NAME_MRVL_PMD		crypto_mrvl
+#define CRYPTODEV_NAME_VIRTIO_SYM_PMD   crypto_virtio
 
 /**
  * Write (spread) data from buffer to mbuf data
diff --git a/test/test/test_cryptodev_aes_test_vectors.h b/test/test/test_cryptodev_aes_test_vectors.h
index 9c13041..e07ca8c 100644
--- a/test/test/test_cryptodev_aes_test_vectors.h
+++ b/test/test/test_cryptodev_aes_test_vectors.h
@@ -1277,7 +1277,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-128-CBC HMAC-SHA1 Encryption Digest "
@@ -1541,7 +1542,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-128-CBC Decryption",
@@ -1553,7 +1555,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption",
@@ -1564,7 +1567,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-192-CBC Encryption Scater gather",
@@ -1583,7 +1587,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CBC Encryption",
@@ -1595,7 +1600,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CBC Decryption",
@@ -1607,7 +1613,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CBC OOP Encryption",
@@ -1617,7 +1624,8 @@ 
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CBC OOP Decryption",
@@ -1627,7 +1635,8 @@ 
 		.pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_OPENSSL |
 			BLOCKCIPHER_TEST_TARGET_PMD_QAT |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-128-CTR Encryption",
@@ -1639,7 +1648,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-128-CTR Decryption",
@@ -1651,7 +1661,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-192-CTR Encryption",
@@ -1662,7 +1673,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-192-CTR Decryption",
@@ -1673,7 +1685,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_MB |
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC
+			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CTR Encryption",
@@ -1685,7 +1698,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-256-CTR Decryption",
@@ -1697,7 +1711,8 @@ 
 			BLOCKCIPHER_TEST_TARGET_PMD_SCHEDULER |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC |
 			BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC |
-			BLOCKCIPHER_TEST_TARGET_PMD_MRVL
+			BLOCKCIPHER_TEST_TARGET_PMD_MRVL |
+			BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO
 	},
 	{
 		.test_descr = "AES-128-CTR Encryption (12-byte IV)",
diff --git a/test/test/test_cryptodev_blockcipher.c b/test/test/test_cryptodev_blockcipher.c
index 9a9fd8b..195180f 100644
--- a/test/test/test_cryptodev_blockcipher.c
+++ b/test/test/test_cryptodev_blockcipher.c
@@ -96,6 +96,8 @@ 
 			RTE_STR(CRYPTODEV_NAME_DPAA_SEC_PMD));
 	int mrvl_pmd = rte_cryptodev_driver_id_get(
 			RTE_STR(CRYPTODEV_NAME_MRVL_PMD));
+	int virtio_pmd = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_VIRTIO_SYM_PMD));
 
 	int nb_segs = 1;
 
@@ -122,7 +124,8 @@ 
 			driver_id == qat_pmd ||
 			driver_id == openssl_pmd ||
 			driver_id == armv8_pmd ||
-			driver_id == mrvl_pmd) { /* Fall through */
+			driver_id == mrvl_pmd ||
+			driver_id == virtio_pmd) { /* Fall through */
 		digest_len = tdata->digest.len;
 	} else if (driver_id == aesni_mb_pmd ||
 			driver_id == scheduler_pmd) {
@@ -597,6 +600,8 @@ 
 			RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
 	int mrvl_pmd = rte_cryptodev_driver_id_get(
 			RTE_STR(CRYPTODEV_NAME_MRVL_PMD));
+	int virtio_pmd = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_VIRTIO_SYM_PMD));
 
 	switch (test_type) {
 	case BLKCIPHER_AES_CHAIN_TYPE:
@@ -659,6 +664,8 @@ 
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC;
 	else if (driver_id == mrvl_pmd)
 		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_MRVL;
+	else if (driver_id == virtio_pmd)
+		target_pmd_mask = BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO;
 	else
 		TEST_ASSERT(0, "Unrecognized cryptodev type");
 
diff --git a/test/test/test_cryptodev_blockcipher.h b/test/test/test_cryptodev_blockcipher.h
index 67c78d4..fadb81d 100644
--- a/test/test/test_cryptodev_blockcipher.h
+++ b/test/test/test_cryptodev_blockcipher.h
@@ -55,6 +55,7 @@ 
 #define BLOCKCIPHER_TEST_TARGET_PMD_DPAA2_SEC	0x0020 /* DPAA2_SEC flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_DPAA_SEC	0x0040 /* DPAA_SEC flag */
 #define BLOCKCIPHER_TEST_TARGET_PMD_MRVL	0x0080 /* Marvell flag */
+#define BLOCKCIPHER_TEST_TARGET_PMD_VIRTIO	0x0100 /* VIRTIO flag */
 
 #define BLOCKCIPHER_TEST_OP_CIPHER	(BLOCKCIPHER_TEST_OP_ENCRYPT | \
 					BLOCKCIPHER_TEST_OP_DECRYPT)