qemu/aio-wait.h-introduce-AIO_WAIT_WHILE_UNLOCKED.patch
Commit 8a522bdd9f by Jiabo Feng <fengjiabo1@huawei.com>, 2024-08-13 17:29:20 +08:00

QEMU update to version 6.2.0-97:
- nbd/server: CVE-2024-7409: Close stray clients at server-stop
- main-loop.h: introduce qemu_in_main_thread()
- aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED
- nbd/server: CVE-2024-7409: Drop non-negotiating clients
- nbd/server: CVE-2024-7409: Cap default max-connections to 100
- nbd/server: Plumb in new args to nbd_client_add()
- nbd: Minor style and typo fixes

Signed-off-by: Jiabo Feng <fengjiabo1@huawei.com>
(cherry picked from commit 5e30f8e310a15452f86723a0ae459d303cc29470)

From 0943a32cdfc513c7a845ecd4dabba06fb1ca46a6 Mon Sep 17 00:00:00 2001
From: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Date: Mon, 26 Sep 2022 05:31:57 -0400
Subject: [PATCH 5/7] aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED

Same as the AIO_WAIT_WHILE macro, but if we are in the main loop, do not
release and then re-acquire ctx_'s AioContext.

Once all AioContext locks go away, this macro will replace
AIO_WAIT_WHILE.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220926093214.506243-5-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
include/block/aio-wait.h | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/include/block/aio-wait.h b/include/block/aio-wait.h
index b39eefb38d..e4b811433d 100644
--- a/include/block/aio-wait.h
+++ b/include/block/aio-wait.h
@@ -59,10 +59,13 @@ typedef struct {
extern AioWait global_aio_wait;
/**
- * AIO_WAIT_WHILE:
+ * AIO_WAIT_WHILE_INTERNAL:
* @ctx: the aio context, or NULL if multiple aio contexts (for which the
* caller does not hold a lock) are involved in the polling condition.
* @cond: wait while this conditional expression is true
+ * @unlock: whether to unlock and then lock again @ctx. This applies
+ * only when waiting for another AioContext from the main loop.
+ * Otherwise it's ignored.
*
* Wait while a condition is true. Use this to implement synchronous
* operations that require event loop activity.
@@ -75,7 +78,7 @@ extern AioWait global_aio_wait;
* wait on conditions between two IOThreads since that could lead to deadlock,
* go via the main loop instead.
*/
-#define AIO_WAIT_WHILE(ctx, cond) ({ \
+#define AIO_WAIT_WHILE_INTERNAL(ctx, cond, unlock) ({ \
bool waited_ = false; \
AioWait *wait_ = &global_aio_wait; \
AioContext *ctx_ = (ctx); \
@@ -90,11 +93,11 @@ extern AioWait global_aio_wait;
assert(qemu_get_current_aio_context() == \
qemu_get_aio_context()); \
while ((cond)) { \
- if (ctx_) { \
+ if (unlock && ctx_) { \
aio_context_release(ctx_); \
} \
aio_poll(qemu_get_aio_context(), true); \
- if (ctx_) { \
+ if (unlock && ctx_) { \
aio_context_acquire(ctx_); \
} \
waited_ = true; \
@@ -103,6 +106,12 @@ extern AioWait global_aio_wait;
qatomic_dec(&wait_->num_waiters); \
waited_; })
+#define AIO_WAIT_WHILE(ctx, cond) \
+ AIO_WAIT_WHILE_INTERNAL(ctx, cond, true)
+
+#define AIO_WAIT_WHILE_UNLOCKED(ctx, cond) \
+ AIO_WAIT_WHILE_INTERNAL(ctx, cond, false)
+
/**
* aio_wait_kick:
* Wake up the main thread if it is waiting on AIO_WAIT_WHILE(). During
--
2.45.1.windows.1
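
For readers unfamiliar with the aio-wait helpers, the following is a minimal
caller sketch (not part of this patch) contrasting the two wrappers. It
assumes the usual QEMU block-layer environment; op_done, my_bh_cb,
wait_for_op and wait_for_op_unlocked are made-up names used only for
illustration.

/*
 * Hypothetical caller sketch, not part of this patch.  It contrasts
 * AIO_WAIT_WHILE with the new AIO_WAIT_WHILE_UNLOCKED; the helper and
 * variable names below are invented for illustration.
 */
#include "qemu/osdep.h"
#include "block/aio.h"
#include "block/aio-wait.h"

static bool op_done;

/* Bottom half that completes the "operation" and wakes the waiter. */
static void my_bh_cb(void *opaque)
{
    op_done = true;
    aio_wait_kick();    /* wake up AIO_WAIT_WHILE*() in the main loop */
}

/*
 * Caller holds ctx's AioContext lock: the classic form releases and
 * re-acquires the lock around every aio_poll() iteration.
 */
static void wait_for_op(AioContext *ctx)
{
    op_done = false;
    aio_bh_schedule_oneshot(ctx, my_bh_cb, NULL);
    AIO_WAIT_WHILE(ctx, !op_done);
}

/*
 * Caller does NOT hold ctx's AioContext lock: the new form polls the
 * main loop without releasing/acquiring the lock, which is the only
 * behaviour left once the AioContext lock goes away entirely.
 */
static void wait_for_op_unlocked(AioContext *ctx)
{
    op_done = false;
    aio_bh_schedule_oneshot(ctx, my_bh_cb, NULL);
    AIO_WAIT_WHILE_UNLOCKED(ctx, !op_done);
}

Since the unlock argument is a compile-time constant in both wrappers, the
lock release/re-acquire branch is folded away in the unlocked variant.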