Paper Title

Generic Lithography Modeling with Dual-band Optics-Inspired Neural Networks

Authors

Haoyu Yang, Zongyi Li, Kumara Sastry, Saumyadip Mukhopadhyay, Mark Kilgard, Anima Anandkumar, Brucek Khailany, Vivek Singh, Haoxing Ren

Abstract

Lithography simulation is a critical step in VLSI design and optimization for manufacturability. Existing solutions for highly accurate lithography simulation with rigorous models are computationally expensive and slow, even when equipped with various approximation techniques. Recently, machine learning has provided alternative solutions for lithography simulation tasks such as coarse-grained edge placement error regression and complete contour prediction. However, the impact of these learning-based methods has been limited by restrictive usage scenarios or low simulation accuracy. To tackle these concerns, we introduce a dual-band optics-inspired neural network design that accounts for the optical physics underlying lithography. To the best of our knowledge, our approach yields the first published via/metal-layer contour simulation at 1nm^2/pixel resolution that supports arbitrary tile sizes. Compared to previous machine-learning-based solutions, we demonstrate that our framework trains much faster and offers significant improvements in efficiency and image quality with a 20X smaller model size. We also achieve an 85X simulation speedup over a traditional lithography simulator with 1% accuracy loss.
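The abstract does not spell out the network architecture, but a "dual-band optics-inspired" design suggests processing features in two spectral bands, echoing how lithographic projection optics act as a band-limited filter on the mask pattern. Below is a minimal, hypothetical PyTorch sketch of one such block: a Fourier-domain branch that retains only low-frequency modes (global shape) alongside a convolutional branch for high-frequency local detail. The class name DualBandBlock, the modes parameter, and the exact branch structure are illustrative assumptions, not the authors' published architecture.

```python
import torch
import torch.nn as nn


class DualBandBlock(nn.Module):
    """Hypothetical dual-band block (illustrative, not the paper's design):
    a Fourier-domain branch keeps only the lowest spatial frequencies,
    mimicking the band-limited projection optics, while a small convolutional
    branch refines local, high-frequency detail such as contour edges."""

    def __init__(self, channels: int, modes: int = 16):
        super().__init__()
        self.modes = modes  # low-frequency modes kept per axis (assumes modes <= spatial size)
        # Learnable complex weights mixing channels on the retained Fourier modes.
        scale = 1.0 / (channels * channels)
        self.w_low = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )
        # Local (high-band) branch: a plain 3x3 convolution.
        self.conv_high = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Low band: transform to the frequency domain and keep only the
        # lowest modes, discarding everything the "optics" cannot pass.
        x_ft = torch.fft.rfft2(x)                      # (b, c, h, w//2 + 1), complex
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.w_low
        )
        low = torch.fft.irfft2(out_ft, s=(h, w))       # back to the spatial domain
        # High band: local convolution recovers fine detail lost above.
        high = self.conv_high(x)
        return torch.relu(low + high)
```

In a full model, several such blocks would presumably sit between an encoder and a decoder mapping mask tiles to resist contours. Because the Fourier branch is parameterized by mode count rather than input size, the same weights can in principle be applied to tiles of any spatial dimension, which is consistent with the abstract's claim of supporting arbitrary tile sizes.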
