arXiv submission date: 2026-02-13
📄 Abstract - Eventizing Traditionally Opaque Binary Neural Networks as 1-safe Petri net Models

Binary Neural Networks (BNNs) offer a low-complexity and energy-efficient alternative to traditional full-precision neural networks by constraining their weights and activations to binary values. However, their discrete, highly non-linear behavior makes them difficult to explain, validate and formally verify. As a result, BNNs remain largely opaque, limiting their suitability in safety-critical domains, where causal transparency and behavioral guarantees are essential. In this work, we introduce a Petri net (PN)-based framework that captures the BNN's internal operations as event-driven processes. By "eventizing" their operations, we expose their causal relationships and dependencies for a fine-grained analysis of concurrency, ordering, and state evolution. Here, we construct modular PN blueprints for core BNN components including activation, gradient computation and weight updates, and compose them into a complete system-level model. We then validate the composed PN against a reference software-based BNN, verify it against reachability and structural checks to establish 1-safeness, deadlock-freeness, mutual exclusion and correct-by-construction causal sequencing, before we assess its scalability and complexity at segment, component, and system levels using the automated measurement tools in Workcraft. Overall, this framework enables causal introspection of transparent and event-driven BNNs that are amenable to formal reasoning and verification.
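The abstract's starting point is that BNNs constrain weights and activations to binary values. A minimal sketch (not code from the paper) of a single binarized layer, using the common convention sign(0) = +1:

```python
import numpy as np

def binarize(x):
    # sign() with the convention sign(0) = +1, as is common in BNN work
    return np.where(x >= 0, 1, -1)

def bnn_layer(x, real_weights):
    # Binarize both inputs and weights, then apply sign() to the dot product.
    # Names here are illustrative, not from the paper.
    xb = binarize(x)
    wb = binarize(real_weights)
    return binarize(xb @ wb)

x = np.array([0.3, -1.2, 0.7])
W = np.random.randn(3, 2)
y = bnn_layer(x, W)  # every entry of y is -1 or +1
```

Because every intermediate value is drawn from {-1, +1}, each multiply-accumulate is a discrete event, which is what makes the event-driven Petri net encoding described in the abstract possible.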

Top-level tags: systems theory, model evaluation
Detailed tags: binary neural networks, petri nets, formal verification, model interpretability, concurrency analysis

Eventizing Traditionally Opaque Binary Neural Networks as 1-safe Petri net Models


1️⃣ One-sentence summary

This paper proposes a new approach that converts hard-to-interpret Binary Neural Networks into a graphical model called a Petri net, making their internal operations as legible as an event flowchart. This enables formal verification and improves their reliability for deployment in safety-critical domains.
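To make the Petri net idea concrete, here is a hypothetical sketch (not the paper's model) of the token-game semantics underlying a 1-safe net: a transition is enabled when all of its input places hold a token, and firing it must never place a second token on any place.

```python
class PetriNet:
    """Toy 1-safe Petri net: markings map each place to 0 or 1 tokens."""

    def __init__(self, places, transitions):
        self.marking = {p: 0 for p in places}
        self.transitions = transitions  # name -> (input_places, output_places)

    def enabled(self, t):
        ins, outs = self.transitions[t]
        # 1-safe enabling: all inputs marked, and no pure output place
        # already holds a token (which would violate 1-safeness on firing)
        return (all(self.marking[p] == 1 for p in ins)
                and all(self.marking[p] == 0 for p in outs if p not in ins))

    def fire(self, t):
        assert self.enabled(t), f"transition {t} not enabled"
        ins, outs = self.transitions[t]
        for p in ins:
            self.marking[p] = 0
        for p in outs:
            self.marking[p] = 1

# A two-place toggle: 'idle' -> 'busy' -> 'idle'; exactly one token exists
# at any time, so the net is trivially 1-safe and deadlock-free.
net = PetriNet(["idle", "busy"],
               {"start": (["idle"], ["busy"]),
                "finish": (["busy"], ["idle"])})
net.marking["idle"] = 1
net.fire("start")
print(net.marking)  # {'idle': 0, 'busy': 1}
```

The paper's framework composes far richer blueprints (activation, gradient computation, weight updates) and checks properties such as 1-safeness and deadlock-freeness with Workcraft's automated tools, but the firing rule above is the basic mechanism those checks reason about.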

Source: arXiv:2602.13128