Metamaterials are emerging as a new paradigmatic material system for rendering unprecedented and tailorable properties across a wide variety of engineering applications. However, the inverse design of metamaterials and their multiscale systems is challenging due to the high-dimensional topological design space, multiple local optima, and high computational cost. To address these hurdles, we propose a novel data-driven metamaterial design framework based on deep generative modeling. A variational autoencoder (VAE) and a regressor for property prediction are simultaneously trained on a large metamaterial database to map complex microstructures into a low-dimensional, continuous, and organized latent space. We show that the latent space of the VAE provides a distance metric to measure shape similarity, enables interpolation between microstructures, and encodes meaningful patterns of variation in geometries and properties. Based on these insights, we propose systematic data-driven methods for the design of microstructures, graded families, and multiscale systems. For microstructure design, tuning mechanical properties and performing complex manipulations of microstructures are easily achieved by simple vector operations in the latent space. These vector operations are further extended to generate metamaterial families with a controlled gradation of mechanical properties by searching on a constructed graph model. For multiscale metamaterial system design, a diverse set of microstructures can be rapidly generated with the VAE for target properties at different locations and then assembled by an efficient graph-based optimization method that ensures compatibility between adjacent microstructures. We demonstrate the framework by designing both functionally graded and heterogeneous metamaterial systems that achieve desired distortion behaviors.
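The latent-space operations described above can be illustrated with a minimal sketch, assuming a trained VAE encoder/decoder pair is available; here hypothetical linear stand-ins named encode and decode replace the real networks, and the property-gradient direction dz is likewise illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a trained VAE: a real model would map a
# 50x50 binary microstructure to a low-dimensional latent vector and back.
LATENT_DIM = 16                                   # assumed latent dimension
W_enc = rng.standard_normal((LATENT_DIM, 2500)) * 0.01
W_dec = rng.standard_normal((2500, LATENT_DIM)) * 0.01

def encode(x):
    """Map a flattened microstructure to its latent code."""
    return W_enc @ x.ravel()

def decode(z):
    """Map a latent code back to a (soft) 50x50 microstructure."""
    return (W_dec @ z).reshape(50, 50)

# Two placeholder microstructures standing in for real unit cells.
x_a = (rng.random((50, 50)) > 0.5).astype(float)
x_b = (rng.random((50, 50)) > 0.5).astype(float)
z_a, z_b = encode(x_a), encode(x_b)

# 1) Shape similarity as Euclidean distance in the latent space.
similarity = np.linalg.norm(z_a - z_b)

# 2) Interpolation: blend two microstructures by moving along the line
#    between their latent codes.
x_mid = decode(0.5 * (z_a + z_b))

# 3) Property tuning: step along an assumed property-increasing direction dz
#    (in practice estimated from latent codes labeled with properties).
dz = rng.standard_normal(LATENT_DIM)
x_tuned = decode(z_a + 0.3 * dz / np.linalg.norm(dz))
```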
In this work, we develop DeepWiPHY, a deep learning-based architecture to replace the channel estimation, common phase error (CPE) correction, sampling rate offset (SRO) correction, and equalization modules of IEEE 802.11ax orthogonal frequency division multiplexing (OFDM) receivers. We first train DeepWiPHY with a synthetic dataset, generated using representative indoor channel models and including the typical radio frequency (RF) impairments that are the source of nonlinearity in wireless systems. To further train and evaluate DeepWiPHY with real-world data, we develop a passive-sniffing data collection testbed composed of Universal Software Radio Peripherals (USRPs) and commercially available IEEE 802.11ax products. A comprehensive evaluation of DeepWiPHY on synthetic and real-world datasets (110 million synthetic OFDM symbols and 14 million real-world OFDM symbols) confirms that, even without fine-tuning the neural network architecture parameters, DeepWiPHY performs comparably to or outperforms conventional WLAN receivers, in terms of both bit error rate (BER) and packet error rate (PER), under a wide range of channel models, signal-to-noise ratio (SNR) levels, and modulation schemes.
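As a rough illustration of what a learned replacement for the equalization chain might look like (a generic sketch, not the DeepWiPHY architecture itself; the subcarrier count and the choice of inputs are assumptions), a small network can map received data subcarriers plus an LTF-based channel reference directly to equalized symbols.

```python
import torch
import torch.nn as nn

class NeuralEqualizer(nn.Module):
    """Toy stand-in for a learned OFDM receiver block: takes a received
    frequency-domain symbol and a reference channel estimate and predicts
    the equalized data subcarriers directly."""
    def __init__(self, n_sc=242):          # assumed number of data subcarriers
        super().__init__()
        # Inputs: received symbol (re/im) + reference channel estimate (re/im).
        self.net = nn.Sequential(
            nn.Linear(4 * n_sc, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 2 * n_sc),       # equalized symbols (re/im)
        )

    def forward(self, y, h_ref):
        x = torch.cat([y.real, y.imag, h_ref.real, h_ref.imag], dim=-1)
        return self.net(x)

# Example forward pass on random complex data standing in for one OFDM symbol.
n_sc = 242
y = torch.randn(8, n_sc, dtype=torch.cfloat)    # received data subcarriers
h = torch.randn(8, n_sc, dtype=torch.cfloat)    # LTF-based channel estimate
eq = NeuralEqualizer(n_sc)(y, h)                # shape: (8, 2 * n_sc)
```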
Locally resonant elastic metamaterials (LREM) can be designed, by optimizing the geometry of the constituent self-repeating unit cells, to potentially damp out vibration in selected frequency ranges, thus yielding desired bandgaps. However, it remains challenging to quickly arrive at unit cell designs that satisfy any requested bandgap specification within a given global frequency range. This paper develops a computationally efficient framework for fast inverse design of LREM by integrating a new type of machine learning model, the invertible neural network (INN). An INN can be trained to predict the bandgap bounds as a function of the unit cell design and, when executed in reverse, simultaneously learns to predict the unit cell design for a given bandgap. In our case, the unit cells are represented by the widths of the outer matrix and the middle soft filler layer. Training data on the frequency response of the unit cell are provided by Bloch dispersion analyses. The trained INN is used to instantaneously retrieve feasible (or near-feasible) inverse designs for a specified bandgap constraint, which then initialize a forward constrained optimization (based on sequential quadratic programming) to find the minimum-mass unit cell that satisfies the bandgap. Case studies show favorable performance of this approach, in terms of bandgap characteristics and minimized mass, compared with the median outcome over ten randomly initialized optimizations for the same specified bandgaps. Further analysis using FEA verifies the bandgap performance of a finite structure comprising an $8\times 8$ arrangement of the unit cells obtained with the INN-accelerated inverse design.
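A sketch of the INN-plus-SQP workflow under stated assumptions: inn_reverse stands in for the trained invertible network run backwards, bandgap_bounds and mass are placeholder models (in the framework above these would come from Bloch dispersion analysis and the unit cell geometry), and scipy's SLSQP serves as the SQP-type solver.

```python
import numpy as np
from scipy.optimize import minimize

def inn_reverse(target_bandgap):
    """Placeholder for the INN run in reverse: bandgap -> [matrix_width, filler_width]."""
    lo, hi = target_bandgap
    return np.array([0.01 + 1e-5 * lo, 0.005 + 1e-5 * (hi - lo)])

def bandgap_bounds(widths):
    """Placeholder mapping from unit-cell widths to (lower, upper) bandgap edges in Hz."""
    w_m, w_f = widths
    return 2.0e4 * w_m, 2.0e4 * w_m + 4.0e4 * w_f

def mass(widths):
    """Placeholder unit-cell mass model (proportional to layer widths)."""
    w_m, w_f = widths
    return 7800.0 * w_m + 1200.0 * w_f

target = (250.0, 450.0)                  # requested bandgap [Hz]
x0 = inn_reverse(target)                 # INN-provided warm start

# SQP-style refinement (scipy's SLSQP): minimize mass subject to the
# achieved bandgap covering the requested range.
res = minimize(
    mass, x0, method="SLSQP",
    bounds=[(1e-3, 0.05), (1e-3, 0.05)],
    constraints=[
        {"type": "ineq", "fun": lambda w: target[0] - bandgap_bounds(w)[0]},
        {"type": "ineq", "fun": lambda w: bandgap_bounds(w)[1] - target[1]},
    ],
)
print("optimized widths:", res.x, "mass:", mass(res.x))
```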
GPUs (graphics processing units) are used for many data-intensive applications, and deep learning systems are among their most important consumers today. As deep learning applications adopt deeper and larger models to achieve higher accuracy, memory management becomes an important research topic for deep learning systems, given that GPU memory is limited. Many approaches have been proposed to address this issue, e.g., model compression and memory swapping; however, they either degrade the model accuracy or require substantial manual intervention. In this paper, we propose two orthogonal approaches to reduce the memory cost from the system perspective. Our approaches are transparent to the models and thus do not affect model accuracy. They exploit the iterative nature of the deep learning training algorithm to derive the lifetime and read/write order of all variables. With the lifetime semantics, we can implement a memory pool with minimal fragmentation. However, the underlying optimization problem is NP-complete, so we propose a heuristic algorithm that reduces memory by up to 13.3% compared with Nvidia's default memory pool, at equal time complexity. With the read/write semantics, variables that are not in use can be swapped out from GPU to CPU to reduce the memory footprint. We propose multiple swapping strategies that automatically decide which variables to swap and when to swap them out (and back in), reducing the memory cost by up to 34.2% without communication overhead.
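The lifetime-based pooling idea can be illustrated with a generic first-fit offset assignment (a simplified sketch with made-up tensor sizes and lifetimes, not the paper's heuristic): tensors whose lifetimes never overlap may occupy the same region of the pool, which lowers the peak allocation.

```python
# Greedy first-fit offset assignment for tensors with known lifetimes.
# Each tensor: (name, size_bytes, first_use_step, last_use_step) -- assumed values.
tensors = [
    ("act1",  400, 0, 3),
    ("act2",  300, 1, 4),
    ("grad1", 400, 3, 6),   # overlaps act1 at step 3, so it needs its own region
    ("grad2", 300, 5, 7),   # disjoint from act1/act2 lifetimes, so it can reuse their space
]

def lifetimes_overlap(a, b):
    """Two tensors conflict only if their [first, last] step intervals intersect."""
    return not (a[3] < b[2] or b[3] < a[2])

def regions_overlap(off_a, size_a, off_b, size_b):
    """Byte ranges [off, off + size) intersect."""
    return not (off_a + size_a <= off_b or off_b + size_b <= off_a)

placements = {}                                     # name -> byte offset in the pool
for t in sorted(tensors, key=lambda t: -t[1]):      # place the largest tensors first
    offset = 0
    placed = False
    while not placed:
        placed = True
        for other in tensors:
            if (other[0] in placements and lifetimes_overlap(t, other)
                    and regions_overlap(offset, t[1], placements[other[0]], other[1])):
                offset = placements[other[0]] + other[1]   # bump past the conflict and retry
                placed = False
                break
    placements[t[0]] = offset

peak = max(placements[t[0]] + t[1] for t in tensors)
print(placements, "peak bytes:", peak)
```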
Aperiodic metamaterials represent a class of structural systems composed of different building blocks (cells), instead of a self-repeating chain of identical unit cells. Optimizing aperiodic cellular structural systems thus presents high-dimensional problems that are challenging to solve using purely high-fidelity structural optimization approaches. Specialized analytical modeling combined with metamodel-based optimization can provide a more tractable alternative. To this end, this paper presents a design automation framework applied to a 1D metamaterial system, namely a drill string, where vibration suppression is of utmost importance. The drill string comprises a set of nonuniform rings attached to the outer surface of a longitudinal rod, so the resultant system can be perceived as an aperiodic 1D metamaterial with each ring/gap pair representing a cell. Despite being a 1D system, the simultaneous consideration of multiple degrees of freedom (torsional, axial, and lateral motions) poses significant computational challenges. Therefore, a transfer matrix method (TMM) is employed to analytically determine the frequency response of the drill string. A suite of artificial neural networks (ANNs) is trained on TMM samples (which incur minute-scale computing cost per evaluation) to model the frequency response. ANN-based optimization is then performed to minimize mass subject to constraints on the gap between consecutive resonance peaks in one case, and to minimize this gap in a second case, leading to crucial improvements over the baselines. A further novel contribution is an inverse modeling approach that can instantaneously produce the minimum-mass 1D metamaterial design for a given desired non-resonant frequency range. This is accomplished with invertible neural networks, and the results show promising alignment with the forward solutions.
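A stripped-down transfer-matrix sketch for axial waves only (the TMM above handles torsional, axial, and lateral motions together; the segment properties below are illustrative): the 2x2 matrices of successive rod/ring segments are multiplied to obtain a frequency-response proxy for the chain, whose peaks mark free-free resonances.

```python
import numpy as np

def axial_transfer_matrix(omega, L, E, rho, A):
    """2x2 transfer matrix relating (displacement, internal force) across a
    uniform rod segment for longitudinal waves at angular frequency omega."""
    k = omega * np.sqrt(rho / E)
    return np.array([
        [np.cos(k * L),              np.sin(k * L) / (E * A * k)],
        [-E * A * k * np.sin(k * L), np.cos(k * L)],
    ])

# Illustrative alternating rod/ring segments: (length m, modulus Pa, density kg/m^3, area m^2).
segments = [
    (0.5, 210e9, 7800.0, 2.0e-3),   # bare rod section
    (0.1, 210e9, 7800.0, 6.0e-3),   # section with attached ring (larger effective area)
] * 4                               # four cells of the 1D chain

freqs = np.linspace(10.0, 2000.0, 400)          # Hz
response = []
for f in freqs:
    omega = 2.0 * np.pi * f
    T = np.eye(2)
    for L, E, rho, A in segments:
        T = axial_transfer_matrix(omega, L, E, rho, A) @ T   # cascade segment matrices
    # For free-free ends, resonances occur where T[1, 0] = 0, so 1/|T[1, 0]|
    # serves as a crude frequency-response proxy.
    response.append(1.0 / (abs(T[1, 0]) + 1e-12))
```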
We consider a multicast scheme recently proposed for a wireless downlink in [1]. It was shown earlier that power control can significantly improve its performance. However, for this system, obtaining the optimal power control is intractable because of the very large state space. Therefore, in this paper we use deep reinforcement learning, approximating the Q-function via a deep neural network. We show that optimal power control can be learnt for reasonably large systems via this approach. The average power constraint is enforced via a Lagrange multiplier, which is also learnt. Finally, we demonstrate that a slight modification of the learning algorithm allows the optimal control to track time-varying system statistics.
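A minimal sketch of the Lagrangian idea described above, with placeholder state/action dimensions and a toy transition (none of these values come from the paper): the Q-learning target uses the reward penalized by lambda times the transmit power, and lambda is updated by dual ascent toward the average power budget.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: the state encoding and discrete power levels are placeholders.
STATE_DIM, N_POWER_LEVELS = 8, 5
P_AVG_BUDGET, GAMMA, LAMBDA_LR = 1.0, 0.95, 1e-3

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_POWER_LEVELS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
lam = 0.0                                   # Lagrange multiplier for the power constraint

def td_step(s, a, r, p, s_next):
    """One Q-learning update on a transition (state, action, throughput reward,
    transmit power, next state), with the power cost folded in via lambda."""
    global lam
    lagrangian_r = r - lam * p              # reward penalized by the power cost
    with torch.no_grad():
        target = lagrangian_r + GAMMA * q_net(s_next).max()
    q_sa = q_net(s)[a]
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dual ascent on lambda: grow it when power exceeds the budget, shrink otherwise.
    lam = max(0.0, lam + LAMBDA_LR * (p.item() - P_AVG_BUDGET))

# Example update on a random transition standing in for one scheduling slot.
s, s_next = torch.randn(STATE_DIM), torch.randn(STATE_DIM)
td_step(s, a=2, r=0.7, p=torch.tensor(1.3), s_next=s_next)
```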