{"resultsPerPage":1,"startIndex":0,"totalResults":1,"format":"NVD_CVE","version":"2.0","timestamp":"2026-05-05T06:51:02.143","vulnerabilities":[{"cve":{"id":"CVE-2024-58057","sourceIdentifier":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","published":"2025-03-06T16:15:51.940","lastModified":"2025-10-28T02:48:14.290","vulnStatus":"Analyzed","cveTags":[],"descriptions":[{"lang":"en","value":"In the Linux kernel, the following vulnerability has been resolved:\n\nidpf: convert workqueues to unbound\n\nWhen a workqueue is created with `WQ_UNBOUND`, its work items are\nserved by special worker-pools, whose host workers are not bound to\nany specific CPU. In the default configuration (i.e. when\n`queue_delayed_work` and friends do not specify which CPU to run the\nwork item on), `WQ_UNBOUND` allows the work item to be executed on any\nCPU in the same node of the CPU it was enqueued on. While this\nsolution potentially sacrifices locality, it avoids contention with\nother processes that might dominate the CPU time of the processor the\nwork item was scheduled on.\n\nThis is not just a theoretical problem: in a particular scenario\nmisconfigured process was hogging most of the time from CPU0, leaving\nless than 0.5% of its CPU time to the kworker. The IDPF workqueues\nthat were using the kworker on CPU0 suffered large completion delays\nas a result, causing performance degradation, timeouts and eventual\nsystem crash.\n\n\n* I have also run a manual test to gauge the performance\n  improvement. The test consists of an antagonist process\n  (`./stress --cpu 2`) consuming as much of CPU 0 as possible. This\n  process is run under `taskset 01` to bind it to CPU0, and its\n  priority is changed with `chrt -pQ 9900 10000 ${pid}` and\n  `renice -n -20 ${pid}` after start.\n\n  Then, the IDPF driver is forced to prefer CPU0 by editing all calls\n  to `queue_delayed_work`, `mod_delayed_work`, etc... 
to use CPU 0.\n\n  Finally, `ktraces` for the workqueue events are collected.\n\n  Without the current patch, the antagonist process can force\n  arbitrary delays between `workqueue_queue_work` and\n  `workqueue_execute_start`, which in my tests were as high as\n  `30ms`. With the current patch applied, the workqueue can be\n  migrated to another unloaded CPU in the same node, and, keeping\n  everything else equal, the maximum delay I could see was `6us`."}],"metrics":{"cvssMetricV31":[{"source":"nvd@nist.gov","type":"Primary","cvssData":{"version":"3.1","vectorString":"CVSS:3.1/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H","baseScore":5.5,"baseSeverity":"MEDIUM","attackVector":"LOCAL","attackComplexity":"LOW","privilegesRequired":"LOW","userInteraction":"NONE","scope":"UNCHANGED","confidentialityImpact":"NONE","integrityImpact":"NONE","availabilityImpact":"HIGH"},"exploitabilityScore":1.8,"impactScore":3.6}]},"weaknesses":[{"source":"nvd@nist.gov","type":"Primary","description":[{"lang":"en","value":"NVD-CWE-noinfo"}]}],"configurations":[{"nodes":[{"operator":"OR","negate":false,"cpeMatch":[{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*","versionStartIncluding":"6.7","versionEndExcluding":"6.12.13","matchCriteriaId":"2897389C-A8C3-4D69-90F2-E701B3D66373"},{"vulnerable":true,"criteria":"cpe:2.3:o:linux:linux_kernel:*:*:*:*:*:*:*:*","versionStartIncluding":"6.13","versionEndExcluding":"6.13.2","matchCriteriaId":"6D4116B1-1BFD-4F23-BA84-169CC05FC5A3"}]}]}],"references":[{"url":"https://git.kernel.org/stable/c/66bf9b3d9e1658333741f075320dc8e7cd6f8d09","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stab
le/c/868202ec3854e13de1164e4a3e25521194c5af72","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]},{"url":"https://git.kernel.org/stable/c/9a5b021cb8186f1854bac2812bd4f396bb1e881c","source":"416baaa9-dc9f-4396-8d5f-8c081fb06d67","tags":["Patch"]}]}}]}