{"resultsPerPage":1,"startIndex":0,"totalResults":1,"format":"NVD_CVE","version":"2.0","timestamp":"2026-05-09T05:55:45.761","vulnerabilities":[{"cve":{"id":"CVE-2026-23113","sourceIdentifier":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","published":"2026-02-14T15:16:06.380","lastModified":"2026-04-18T09:16:14.100","vulnStatus":"Modified","cveTags":[],"descriptions":[{"lang":"en","value":"In the Linux kernel, the following vulnerability has been resolved:\n\nio_uring/io-wq: check IO_WQ_BIT_EXIT inside work run loop\n\nCurrently this is checked before running the pending work. Normally this\nis quite fine, as work items either end up blocking (which will create a\nnew worker for other items), or they complete fairly quickly. But syzbot\nreports an issue where io-wq takes seemingly forever to exit, and with a\nbit of debugging, this turns out to be because it queues a bunch of big\n(2GB - 4096b) reads with a /dev/msr* file. Since this file type doesn't\nsupport ->read_iter(), loop_rw_iter() ends up handling them. Each read\nreturns 16MB of data read, which takes 20 (!!) seconds. With a bunch of\nthese pending, processing the whole chain can take a long time. 
Easily\nlonger than the syzbot uninterruptible sleep timeout of 140 seconds.\nThis then triggers a complaint off the io-wq exit path:\n\nINFO: task syz.4.135:6326 blocked for more than 143 seconds.\n      Not tainted syzkaller #0\n      Blocked by coredump.\n\"echo 0 > /proc/sys/kernel/hung_task_timeout_secs\" disables this message.\ntask:syz.4.135       state:D stack:26824 pid:6326  tgid:6324  ppid:5957   task_flags:0x400548 flags:0x00080000\nCall Trace:\n <TASK>\n context_switch kernel/sched/core.c:5256 [inline]\n __schedule+0x1139/0x6150 kernel/sched/core.c:6863\n __schedule_loop kernel/sched/core.c:6945 [inline]\n schedule+0xe7/0x3a0 kernel/sched/core.c:6960\n schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75\n do_wait_for_common kernel/sched/completion.c:100 [inline]\n __wait_for_common+0x2fc/0x4e0 kernel/sched/completion.c:121\n io_wq_exit_workers io_uring/io-wq.c:1328 [inline]\n io_wq_put_and_exit+0x271/0x8a0 io_uring/io-wq.c:1356\n io_uring_clean_tctx+0x10d/0x190 io_uring/tctx.c:203\n io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:651\n io_uring_files_cancel include/linux/io_uring.h:19 [inline]\n do_exit+0x2ce/0x2bd0 kernel/exit.c:911\n do_group_exit+0xd3/0x2a0 kernel/exit.c:1112\n get_signal+0x2671/0x26d0 kernel/signal.c:3034\n arch_do_signal_or_restart+0x8f/0x7e0 arch/x86/kernel/signal.c:337\n __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]\n exit_to_user_mode_loop+0x8c/0x540 kernel/entry/common.c:75\n __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]\n syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]\n syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]\n syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]\n do_syscall_64+0x4ee/0xf80 arch/x86/entry/syscall_64.c:100\n entry_SYSCALL_64_after_hwframe+0x77/0x7f\nRIP: 0033:0x7fa02738f749\nRSP: 002b:00007fa0281ae0e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca\nRAX: fffffffffffffe00 RBX: 
00007fa0275e6098 RCX: 00007fa02738f749\nRDX: 0000000000000000 RSI: 0000000000000080 RDI: 00007fa0275e6098\nRBP: 00007fa0275e6090 R08: 0000000000000000 R09: 0000000000000000\nR10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000\nR13: 00007fa0275e6128 R14: 00007fff14e4fcb0 R15: 00007fff14e4fd98\n\nThere's really nothing wrong here, outside of processing these reads\nwill take a LONG time. However, we can speed up the exit by checking the\nIO_WQ_BIT_EXIT inside the io_worker_handle_work() loop, as syzbot will\nexit the ring after queueing up all of these reads. Then once the first\nitem is processed, io-wq will simply cancel the rest. That should avoid\nsyzbot running into this complaint again."}],"metrics":{"cvssMetricV31":[{"source":"nvd@nist.gov","type":"Primary","cvssData":{"version":"3.1","vectorString":"CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H","baseScore":5.5,"baseSeverity":"MEDIUM","attackVector":"LOCAL","attackComplexity":"LOW","privilegesRequired":"LOW","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"NONE","integrityImpact":"NONE","availabilityImpact":"HIGH"},"exploitabilityScore":1.8,"impactScore":3.6}]},"weaknesses":[{"source":"nvd@nist.gov","type":"Primary","description":[{"lang":"en","value":"NVD-CWE-noinfo"}]}],"configurations":[{"nodes":[{"operator":"OR","negate":false,"cpeMatch":[{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*","versionStartIncluding":"5.12.1","versionEndExcluding":"6.6.122","matchCriteriaId":"405C58E5-6DB2-4F7A-9D83-41C17D9DA63B"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*","versionStartIncluding":"6.7","versionEndExcluding":"6.12.68","matchCriteriaId":"52F38E19-0FDD-4992-9D6D-D4169D689598"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*","versionStartIncluding":"6.13","versionEndExcluding":"6.18.8","matchCriteriaId":"E65C6E79-7EBE-4C77-93F0-818CF5B38F4E"},{"vulnerable":true,"criteria":"cpe:2.3:o:linu
x:linux_kernel:5.12:-:*:*:*:*:*:*","matchCriteriaId":"75EB504D-4A83-4C67-9C8D-FD9C6C8EB4CD"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:5.12:rc7:*:*:*:*:*:*","matchCriteriaId":"06AE06A6-A0C3-4556-BFFA-3D6E4BAC43C8"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:5.12:rc8:*:*:*:*:*:*","matchCriteriaId":"FCE63934-38CF-4311-AD72-624E86AF3889"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc1:*:*:*:*:*:*","matchCriteriaId":"17B67AA7-40D6-4AFA-8459-F200F3D7CFD1"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc2:*:*:*:*:*:*","matchCriteriaId":"C47E4CC9-C826-4FA9-B014-7FE3D9B318B2"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc3:*:*:*:*:*:*","matchCriteriaId":"F71D92C0-C023-48BD-B3B6-70B638EEE298"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc4:*:*:*:*:*:*","matchCriteriaId":"13580667-0A98-40CC-B29F-D12790B91BDB"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc5:*:*:*:*:*:*","matchCriteriaId":"CAD1FED7-CF48-47BF-AC7D-7B6FA3C065FC"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:6.19:rc6:*:*:*:*:*:*","matchCriteriaId":"3EF854A1-ABB1-4E93-BE9A-44569EC76C0D"}]}]}],"references":[{"url":"https://git.kernel.org/stable/c/10dc959398175736e495f71c771f8641e1ca1907","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stable/c/27e47500fac23d15b7dc93ff650bc4844d2581bd","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67"},{"url":"https://git.kernel.org/stable/c/2e8ca1078b14142db2ce51cbd18ff9971560046b","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stable/c/85eb83694a91c89d9abe615d717c0053c3efa714","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stable/c/bdf0bf73006ea8af9327cdb85cfdff4c23a5f966","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stable/c/d05d9
9573f81a091547b1778b9a50120f5d6c68a","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67"}]}}]}