  1. Mar 13, 2023
  2. Mar 07, 2023
  3. Mar 02, 2023
    • backends/vhost-user: remove the ioeventfd check · e1a0e635
      Alex Bennée authored
      
      While ioeventfds are needed for good performance with KVM guests, they
      should not be a gating requirement. We can run vhost-user backends using
      simulated ioeventfds or in-band signalling.
      
      With this change I can run:
      
        $QEMU $OPTS \
          -display gtk,gl=on \
          -device vhost-user-gpu-pci,chardev=vhgpu \
          -chardev socket,id=vhgpu,path=vhgpu.sock
      
      with:
      
        ./contrib/vhost-user-gpu/vhost-user-gpu \
          -s vhgpu.sock \
          -v
      
      and at least see things start up - although the display gets rotated by
      180 degrees. Once lightdm takes over we never make it to the login
      prompt and just get a blank screen.
      
      Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
      Cc: Gerd Hoffmann <kraxel@redhat.com>
      Message-Id: <20221202132231.1048669-1-alex.bennee@linaro.org>
      
      Message-Id: <20230130124728.175610-1-alex.bennee@linaro.org>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      e1a0e635
  4. Feb 23, 2023
  5. Feb 08, 2023
  6. Jan 16, 2023
  7. Dec 28, 2022
    • hostmem: Honor multiple preferred nodes if possible · 6bb613f0
      Michal Privoznik authored
      
      If a memory-backend is configured with mode
      HOST_MEM_POLICY_PREFERRED then
      host_memory_backend_memory_complete() calls mbind() as:
      
        mbind(..., MPOL_PREFERRED, nodemask, ...);
      
      Here, 'nodemask' is a bitmap of host NUMA nodes and corresponds
      to the .host-nodes attribute. Therefore, there can be multiple
      nodes specified. However, the documentation to MPOL_PREFERRED
      says:
      
        MPOL_PREFERRED
          This mode sets the preferred node for allocation. ...
          If nodemask specifies more than one node ID, the first node
          in the mask will be selected as the preferred node.
      
      Therefore, only the first node is honored and the rest are
      silently ignored. With recent changes to the kernel and
      numactl we can do better.
      
      The Linux kernel added in v5.15 via commit cfcaa66f8032
      ("mm/hugetlb: add support for mempolicy MPOL_PREFERRED_MANY")
      support for MPOL_PREFERRED_MANY, which accepts multiple preferred
      NUMA nodes instead.
      
      Then, numa_has_preferred_many() API was introduced to numactl
      (v2.0.15~26) allowing applications to query kernel support.
      
      Wiring this all together, we can pass MPOL_PREFERRED_MANY to the
      mbind() call instead and stop silently ignoring the additional nodes.
      
      Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
      Message-Id: <a0b4adce1af5bd2344c2218eb4a04b3ff7bcfdb4.1671097918.git.mprivozn@redhat.com>
      Reviewed-by: David Hildenbrand <david@redhat.com>
      Signed-off-by: David Hildenbrand <david@redhat.com>
      6bb613f0
  8. Dec 14, 2022
    • qapi tpm: Elide redundant has_FOO in generated C · ced29396
      Markus Armbruster authored
      
      The has_FOO for pointer-valued FOO are redundant, except for arrays.
      They are also a nuisance to work with.  Recent commit "qapi: Start to
      elide redundant has_FOO in generated C" provided the means to elide
      them step by step.  This is the step for qapi/tpm.json.
      
      Said commit explains the transformation in more detail.  The invariant
      violations mentioned there do not occur here.
      
      Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
      Message-Id: <20221104160712.3005652-26-armbru@redhat.com>
      ced29396
  9. Dec 01, 2022
  10. Nov 02, 2022
  11. Oct 27, 2022
  12. Sep 13, 2022
  13. Sep 09, 2022
  14. Aug 26, 2022
  15. Aug 25, 2022
  16. Aug 18, 2022
    • dbus-vmstate: Restrict error checks to registered proxies in dbus_get_proxies · 27485832
      Priyankar Jain authored
      
      The purpose of dbus_get_proxies is to construct the proxies corresponding
      to the IDs registered to dbus-vmstate.

      Currently, this function returns an error if there is any failure
      while instantiating a proxy for any of the names on the dbus.
      
      Ideally, this function should error out only if it is not able to find
      and validate the proxies registered to the backend; otherwise, any
      offending process (e.g. one that purposefully does not export its Id
      property on the dbus) may connect to the dbus and cause migration
      failures.
      
      This commit ensures that dbus_get_proxies returns an error only if it is
      not able to find and validate the proxies of interest (the IDs
      registered during dbus-vmstate instantiation).
      
      Signed-off-by: Priyankar Jain <priyankar.jain@nutanix.com>
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Message-Id: <1637936117-37977-1-git-send-email-priyankar.jain@nutanix.com>
      27485832
  17. Jun 16, 2022
    • crypto: Introduce RSA algorithm · 0e660a6f
      Zhenwei Pi authored
      
      There are two parts in this patch:
      1. support the akcipher service in the cryptodev-builtin driver
      2. support the akcipher service in the virtio-crypto driver

      In principle these should be two separate patches; they are merged into
      one to avoid a compile error.

      virtio-crypto gets requests from the guest side and forwards them to the
      builtin driver to handle.
      
      Tested with a Linux guest:
      1. The self-test framework of the crypto layer works fine in the guest
         kernel.
      2. With a Linux guest (with asymmetric-key support), the following
         script tests (note that the pkey_XXX operations are supported only
         in newer versions of keyutils):
        - both public key & private key
        - create/close session
        - encrypt/decrypt/sign/verify basic driver operations
        - the kernel crypto layer (pkey add/query)

      All the cases work fine.
      
      Run script in guest:
      rm -rf *.der *.pem *.pfx
      modprobe pkcs8_key_parser # if CONFIG_PKCS8_PRIVATE_KEY_PARSER=m
      rm -rf /tmp/data
      dd if=/dev/random of=/tmp/data count=1 bs=20
      
      openssl req -nodes -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -subj "/C=CN/ST=BJ/L=HD/O=qemu/OU=dev/CN=qemu/emailAddress=qemu@qemu.org"
      openssl pkcs8 -in key.pem -topk8 -nocrypt -outform DER -out key.der
      openssl x509 -in cert.pem -inform PEM -outform DER -out cert.der
      
      PRIV_KEY_ID=`cat key.der | keyctl padd asymmetric test_priv_key @s`
      echo "priv key id = "$PRIV_KEY_ID
      PUB_KEY_ID=`cat cert.der | keyctl padd asymmetric test_pub_key @s`
      echo "pub key id = "$PUB_KEY_ID
      
      keyctl pkey_query $PRIV_KEY_ID 0
      keyctl pkey_query $PUB_KEY_ID 0
      
      echo "Enc with priv key..."
      keyctl pkey_encrypt $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.priv
      echo "Dec with priv key..."
      keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.priv enc=pkcs1 >/tmp/dec
      cmp /tmp/data /tmp/dec
      
      echo "Sign with priv key..."
      keyctl pkey_sign $PRIV_KEY_ID 0 /tmp/data enc=pkcs1 hash=sha1 > /tmp/sig
      echo "Verify with priv key..."
      keyctl pkey_verify $PRIV_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1
      
      echo "Enc with pub key..."
      keyctl pkey_encrypt $PUB_KEY_ID 0 /tmp/data enc=pkcs1 >/tmp/enc.pub
      echo "Dec with priv key..."
      keyctl pkey_decrypt $PRIV_KEY_ID 0 /tmp/enc.pub enc=pkcs1 >/tmp/dec
      cmp /tmp/data /tmp/dec
      
      echo "Verify with pub key..."
      keyctl pkey_verify $PUB_KEY_ID 0 /tmp/data /tmp/sig enc=pkcs1 hash=sha1
      
      Reviewed-by: Gonglei <arei.gonglei@huawei.com>
      Signed-off-by: lei he <helei.sig11@bytedance.com>
      Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
      Message-Id: <20220611064243.24535-2-pizhenwei@bytedance.com>
      Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
      Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
      0e660a6f
  18. May 23, 2022
    • hostmem: default the amount of prealloc-threads to smp-cpus · f8d426a6
      Jaroslav Jindrak authored
      
      Prior to the introduction of the prealloc-threads property, the number
      of threads used to preallocate memory was derived from the smp-cpus
      value passed to qemu, the number of physical CPUs on the host,
      and a hardcoded maximum. When the prealloc-threads property
      was introduced, it included a default of 1 in backends/hostmem.c and
      a default of smp-cpus via the sugar API for the property itself. The
      latter default is not applied when the property is not specified on
      qemu's command line, so guests that were not adjusted for this change
      suddenly started to use the default of 1 thread to preallocate memory,
      which resulted in observable slowdowns in boots of guests with large
      memory (e.g. when using libvirt <8.2.0 or managing guests manually).
      
      This commit restores the original behavior for these cases while not
      impacting guests started with the prealloc-threads property in any way.
      
      Fixes: 220c1fd864e9d ("hostmem: introduce "prealloc-threads" property")
      Signed-off-by: Jaroslav Jindrak <dzejrou@gmail.com>
      Message-Id: <20220517123858.7933-1-dzejrou@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      f8d426a6
  19. May 14, 2022
  20. May 07, 2022
  21. Apr 28, 2022
  22. Apr 06, 2022