In this paper, we propose a machine-learning-assisted modeling framework for the design-technology co-optimization (DTCO) flow. A neural network (NN) based surrogate model is used as an alternative to the compact model of new devices, requiring no prior knowledge of device physics, to predict device and circuit electrical characteristics. The framework is demonstrated and verified on FinFETs with high prediction accuracy at both the device and circuit levels. Details of the data handling and prediction results are discussed. Moreover, the same framework is applied to the tunnel FET (TFET), a device with a new transport mechanism, to predict its device and circuit characteristics. This work provides a new modeling method for the DTCO flow.
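The abstract does not disclose the network architecture, input features, or training data. A minimal sketch of such a surrogate, assuming hypothetical device parameters (gate length, oxide thickness, bias voltages) and a synthetic stand-in for TCAD or measured drain current, could look as follows:

```python
# Minimal NN surrogate sketch for device I-V prediction (illustrative only).
# The feature set and the synthetic target below are assumptions, not the
# paper's actual setup.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(10, 30, n),    # gate length (nm)      -- assumed feature
    rng.uniform(0.5, 1.5, n),  # oxide thickness (nm)  -- assumed feature
    rng.uniform(0.0, 1.0, n),  # V_gs (V)
    rng.uniform(0.0, 1.0, n),  # V_ds (V)
])
# Placeholder target standing in for TCAD/measured drain current (log scale).
y = np.log10(1e-9 + 1e-4 * X[:, 2] ** 2 * X[:, 3] / X[:, 0])

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X, y)

# Query the surrogate like a compact model: predict log10(I_d) at a bias point.
print(surrogate.predict([[20.0, 1.0, 0.8, 0.6]]))
```

Once trained, the surrogate is queried exactly like a compact model inside circuit-level simulation loops, which is what lets the framework bypass physics-based model derivation for new devices.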
With the emergence of new photonic and plasmonic materials with optimized properties, as well as advanced nanofabrication techniques, nanophotonic devices are now capable of providing solutions to global challenges in energy conversion, information technologies, chemical/biological sensing, space exploration, quantum computing, and secure communication. Addressing these grand challenges poses inherently complex, multidisciplinary problems with a multitude of stringent constraints in conjunction with the required system performance. Conventional optimization techniques have long been utilized as powerful tools for such multi-constrained design tasks. One example is topology optimization, which has emerged as a highly successful approach to the advanced design of non-intuitive photonic structures. Despite its many advantages, this technique requires substantial computational resources and thus has very limited applicability to highly constrained optimization problems within high-dimensional parametric spaces. In our approach, we merge the topology optimization method with machine learning algorithms such as adversarial autoencoders and show substantial improvement of the optimization process by providing unparalleled control of compact design space representations. By enabling efficient global optimization searches within complex landscapes, the proposed compact hyperparametric representations could become crucial for multi-constrained problems. The proposed approach could enable a much broader scope of optimal designs and data-driven materials synthesis that goes beyond photonic and optoelectronic applications.
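The core mechanism here is the adversarial autoencoder (AAE): an autoencoder whose latent distribution is pushed toward a chosen prior by a discriminator, yielding a compact, well-behaved design space. A minimal PyTorch sketch, assuming 32x32 binary topology patterns and an 8-dimensional Gaussian-regularized latent space (all sizes and the random training data are placeholders, not the paper's architecture):

```python
# Minimal adversarial autoencoder (AAE) sketch: compress topology patterns
# into a latent space regularized toward N(0, I) by a discriminator.
import torch
import torch.nn as nn

LATENT = 8
enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 256), nn.ReLU(),
                    nn.Linear(256, LATENT))
dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                    nn.Linear(256, 32 * 32), nn.Sigmoid())
disc = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = (torch.rand(64, 1, 32, 32) > 0.5).float()  # stand-in for topology patterns

for step in range(100):
    # Discriminator step: tell prior samples apart from encoded designs.
    z = enc(x)
    z_prior = torch.randn_like(z)
    loss_d = (bce(disc(z_prior), torch.ones(64, 1)) +
              bce(disc(z.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Autoencoder step: reconstruct patterns and fool the discriminator,
    # which shapes the compact design space representation.
    z = enc(x)
    recon = dec(z).view_as(x)
    loss_rec = nn.functional.binary_cross_entropy(recon, x)
    loss_g = bce(disc(z), torch.ones(64, 1))
    opt_ae.zero_grad(); (loss_rec + 0.1 * loss_g).backward(); opt_ae.step()
```

Because the latent distribution is matched to a known prior, a global optimizer can later search over that low-dimensional space instead of the full pixel-level topology parametrization.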
Over the past decade, artificially engineered optical materials and nanostructured thin films have revolutionized the area of photonics through the concepts of metamaterials and metasurfaces, in which spatially varying structures yield tailorable, by-design effective electromagnetic properties. The current state-of-the-art approach to designing and optimizing such structures relies heavily on simplistic, intuitive shapes for their unit cells, or meta-atoms. Such an approach cannot provide a global solution to a complex optimization problem in which the meta-atom shape, in-plane geometry, out-of-plane architecture, and constituent materials must all be chosen properly to yield maximum performance. In this work, we present a novel machine-learning-assisted global optimization framework for photonic meta-device design. We demonstrate that coupling an adversarial autoencoder with a metaheuristic optimization framework significantly enhances the optimization search efficiency for meta-device configurations with complex topologies. We showcase the concept of physics-driven compressed design space engineering, which introduces advanced regularization into the compressed space of the adversarial autoencoder based on the optical responses of the devices. Beyond significantly advancing global optimization schemes, our approach can assist in gaining comprehensive design intuition by revealing the underlying physics of the optical performance of meta-devices with complex topologies and material compositions.
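A sketch of the metaheuristic half of this pipeline: a global optimizer searches the AAE's latent space, decoding each candidate vector into a topology and scoring it with an electromagnetic solver. Here differential evolution stands in for the paper's (unspecified) metaheuristic, and decode/figure_of_merit are placeholders for the trained AAE decoder and a full-wave EM evaluation:

```python
# Metaheuristic search in the compressed (latent) design space -- sketch only.
import numpy as np
from scipy.optimize import differential_evolution

LATENT = 8

def decode(z):
    # Placeholder for the trained AAE decoder: latent vector -> 32x32 topology.
    seed = int(np.abs(z).sum() * 1e6) % (2 ** 32)
    rng = np.random.default_rng(seed)
    density = 1.0 / (1.0 + np.exp(-z.mean()))
    return (rng.random((32, 32)) < density).astype(float)

def figure_of_merit(z):
    # Placeholder objective; in practice, a full-wave EM simulation of the
    # decoded topology (e.g., transmission at a target wavelength).
    pattern = decode(np.asarray(z))
    return -pattern.mean() * np.exp(-np.sum(np.square(z)) / LATENT)  # minimize

bounds = [(-3.0, 3.0)] * LATENT  # stay inside the Gaussian-regularized prior
result = differential_evolution(figure_of_merit, bounds, maxiter=20, seed=0)
best_pattern = decode(result.x)
print("best latent:", result.x, "FOM:", -result.fun)
```

The design choice that matters is the bounds: because the AAE regularizes the latent space toward a known prior, the optimizer can restrict its search to a small box where decoded topologies remain physically plausible.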
Designing and implementing efficient, provably correct parallel machine learning (ML) algorithms is challenging. Existing high-level parallel abstractions like MapReduce are insufficiently expressive, while low-level tools like MPI and Pthreads leave ML experts repeatedly solving the same design challenges. By targeting common patterns in ML, we developed GraphLab, which improves upon abstractions like MapReduce by compactly expressing asynchronous iterative algorithms with sparse computational dependencies while ensuring data consistency and achieving a high degree of parallel performance. We demonstrate the expressiveness of the GraphLab framework by designing and implementing parallel versions of belief propagation, Gibbs sampling, Co-EM, Lasso, and compressed sensing. We show that using GraphLab we can achieve excellent parallel performance on large-scale real-world problems.
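The programming model behind this abstraction is an update function that runs on one vertex's scope (its data plus adjacent edges and vertices) and can reschedule neighbors, which is what lets asynchronous iterative algorithms with sparse dependencies be expressed compactly. A toy, sequential sketch of that pattern using dynamic PageRank (this illustrates the model only; real GraphLab executes updates in parallel under configurable consistency guarantees):

```python
# GraphLab-style vertex update sketch: update one vertex from its scope and
# reschedule its out-neighbors only when the value actually changed.
from collections import deque

graph = {0: [1, 2], 1: [2], 2: [0]}          # adjacency list: src -> dsts
in_nbrs = {v: [u for u in graph if v in graph[u]] for v in graph}
rank = {v: 1.0 / len(graph) for v in graph}  # per-vertex data

DAMP, TOL = 0.85, 1e-6

def pagerank_update(v):
    """Update one vertex from its in-neighbors; report whether it changed."""
    total = sum(rank[u] / len(graph[u]) for u in in_nbrs[v])
    new = (1 - DAMP) / len(graph) + DAMP * total
    changed = abs(new - rank[v]) > TOL
    rank[v] = new
    return changed

schedule = deque(graph)                      # initially schedule every vertex
while schedule:
    v = schedule.popleft()
    if pagerank_update(v):
        schedule.extend(graph[v])            # dynamic, data-driven scheduling

print(rank)
```

The contrast with MapReduce is the scheduler: work is generated adaptively by the updates themselves rather than in rigid synchronous rounds over all data.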
Quantum annealing devices such as the ones produced by D-Wave Systems are typically used for solving optimization and sampling tasks, and in both academia and industry the characterization of their usefulness is a subject of active research. Any problem that can naturally be described as a weighted, undirected graph is a particularly interesting candidate, since such a problem may be formulated as a quadratic unconstrained binary optimization (QUBO) instance, which is solvable on D-Wave's Chimera graph architecture. In this paper, we introduce a quantum-assisted finite-element method for design optimization. We show that we can minimize a shape-specific quantity, in our case a ray approximation of the sound pressure at a specific position around an object, by manipulating the shape of that object. Our algorithm belongs to the class of quantum-assisted algorithms: the optimization task runs iteratively on a D-Wave 2000Q quantum processing unit (QPU), while the evaluation and interpretation of the results happen classically. Our first and foremost aim is to explain how to represent and solve parts of these problems with the help of a QPU, not to prove supremacy over existing classical finite-element algorithms for design optimization.
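For concreteness, a QUBO instance minimizes E(x) = sum_i Q_ii x_i + sum_{i<j} Q_ij x_i x_j over binary variables x_i in {0, 1}. A minimal sketch with made-up coefficients, using brute-force enumeration as a stand-in for the QPU (in the paper's setting the coefficients would encode the discretized shape-optimization objective):

```python
# Toy QUBO of the kind a quantum annealer minimizes; the coefficients are
# illustrative, and exhaustive search replaces the D-Wave QPU sampler.
from itertools import product

Q = {  # (i, j) -> coefficient; diagonal terms are linear biases
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0,
    (0, 1): 2.0, (0, 2): -0.5, (1, 2): -0.5,
}
n = 3

def energy(x):
    """QUBO objective for a binary assignment x."""
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

best = min(product((0, 1), repeat=n), key=energy)
print("minimizing assignment:", best, "energy:", energy(best))
```

On real hardware, the same Q dictionary would be embedded onto the Chimera topology and sampled by the annealer, with the classical side of the loop evaluating the decoded shape and constructing the next QUBO.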