{"batchcomplete":"","continue":{"lecontinue":"20251017094222|587","continue":"-||"},"query":{"logevents":[{"logid":597,"ns":2,"title":"User:Cjf ldk","pageid":0,"logpage":0,"params":{"userid":24},"type":"newusers","action":"create","user":"Cjf ldk","timestamp":"2026-04-20T03:14:21Z","comment":""},{"logid":596,"ns":0,"title":"Infineon Aurix TC4x","pageid":0,"logpage":384,"params":{},"type":"delete","action":"delete","user":"Timo.stripf","timestamp":"2026-02-09T12:03:39Z","comment":"content was: \"#REDIRECT [[Infineon AURIX TC4x]]\", and the only contributor was \"[[Special:Contributions/Timo.stripf|Timo.stripf]]\" ([[User talk:Timo.stripf|talk]])"},{"logid":595,"ns":0,"title":"Tricore Instruction Set Architecture","pageid":0,"logpage":409,"params":{},"type":"delete","action":"delete","user":"Timo.stripf","timestamp":"2026-02-09T12:03:06Z","comment":"content was: \"#REDIRECT [[TriCore Instruction Set Architecture]]\", and the only contributor was \"[[Special:Contributions/Timo.stripf|Timo.stripf]]\" ([[User talk:Timo.stripf|talk]])"},{"logid":594,"ns":0,"title":"Tricore TC1.6.2 Instruction Set Architecture","pageid":0,"logpage":405,"params":{},"type":"delete","action":"delete","user":"Timo.stripf","timestamp":"2026-02-09T12:02:45Z","comment":"content was: \"#REDIRECT [[Tricore Instruction Set Architecture]]\", and the only contributor was \"[[Special:Contributions/Timo.stripf|Timo.stripf]]\" ([[User talk:Timo.stripf|talk]])"},{"logid":593,"ns":0,"title":"emmtrix ONNX-to-C Code Generator","pageid":475,"logpage":475,"params":{},"type":"create","action":"create","user":"Timo.stripf","timestamp":"2026-02-04T01:17:50Z","comment":"Created page with \"'''emmtrix ONNX-to-C Code Generator (emx-onnx-cgen)''' is an emmtrix-developed '''AI frontend compiler''' that translates ONNX models into '''deterministic, analyzable C code''' specifically designed for '''auto-vectorization and embedded target optimization'''.  \nThe primary goal of emx-onnx-cgen is not to perform aggressive hardware-specific optimizations itself, but to generate '''high-quality C code''' that serves as an ideal input for the emmtrix '''Vectorizer''' and...\""},{"logid":592,"ns":14,"title":"Category:Math Function Accuracy","pageid":0,"logpage":425,"params":{"target_ns":14,"target_title":"Category:Numerical Precision","suppressredirect":""},"type":"move","action":"move","user":"Timo.stripf","timestamp":"2026-02-03T09:50:52Z","comment":""},{"logid":591,"ns":0,"title":"Numerical Precision in ONNX and AI Inference","pageid":474,"logpage":474,"params":{},"type":"create","action":"create","user":"Timo.stripf","timestamp":"2026-01-30T14:28:20Z","comment":"Created page with \"= Numerical Precision in ONNX and AI Inference =  == Introduction ==  '''Open Neural Network Exchange (ONNX)''' is an open standard format for representing machine learning models and neural network computations across different frameworks and hardware<sup>[[1]](#ref-1)</sup>. As models are exported and deployed via ONNX, the '''numerical precision''' of computations becomes critical. Deep learning inference involves a variety of floating-point operations, and small nume...\""},{"logid":590,"ns":2,"title":"User:Timos","pageid":0,"logpage":0,"params":{"userid":23},"type":"newusers","action":"create","user":"Timos","timestamp":"2025-11-28T08:04:39Z","comment":""},{"logid":589,"ns":2,"title":"User:Modern-Three-Wheel-Scooter4053","pageid":0,"logpage":0,"params":{"userid":22},"type":"newusers","action":"create","user":"Modern-Three-Wheel-Scooter4053","timestamp":"2025-11-21T17:59:54Z","comment":""},{"logid":588,"ns":2,"title":"User:Automatic-Espresso-Machine-UK4543","pageid":0,"logpage":0,"params":{"userid":21},"type":"newusers","action":"create","user":"Automatic-Espresso-Machine-UK4543","timestamp":"2025-10-28T16:24:38Z","comment":""}]}}