In physical design, human designers typically place macros via trial and error, a process that can be viewed as a Markov decision process (MDP). Reinforcement learning (RL) methods have demonstrated superhuman performance on macro placement. In this paper, we propose an extension to this prior work (Mirhoseini et al., 2020). We first describe the details of the policy and value network architecture. We then replace the force-directed method with DREAMPlace for placing standard cells in the RL environment. We also compare our improved method with other academic placers on public benchmarks.
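The MDP framing above can be sketched with a toy sequential-placement environment. This is a hypothetical illustration, not the authors' network or reward: the grid size, the random fallback policy, and the half-perimeter wirelength proxy are all assumptions.

```python
import random

def place_macros(num_macros, grid=8, policy=None, seed=0):
    """Sequential macro placement viewed as an MDP:
    state = list of occupied cells, action = choice of a free cell."""
    rng = random.Random(seed)
    occupied = []
    for _ in range(num_macros):
        free = [(r, c) for r in range(grid) for c in range(grid)
                if (r, c) not in occupied]
        # a learned policy would score `free` given `occupied`;
        # here we fall back to a random choice
        cell = policy(occupied, free) if policy else rng.choice(free)
        occupied.append(cell)  # environment transition
    return occupied

def reward(placement):
    """Terminal reward proxy: negative half-perimeter of the
    bounding box of all placed macros (smaller spread is better)."""
    rows = [r for r, _ in placement]
    cols = [c for _, c in placement]
    return -((max(rows) - min(rows)) + (max(cols) - min(cols)))
```

In the actual setting the reward is only available after all macros (and standard cells) are placed, which is why a fast standard-cell placer such as DREAMPlace matters for the RL loop.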
In this paper, utilizing Wang's Harnack inequality with power and the Banach fixed point theorem, the weak well-posedness of distribution-dependent SDEs with integrable drift is investigated. In addition, using a decoupling trick, some regularity results, such as relative entropy and Sobolev estimates of the invariant probability measure, are proved. Furthermore, by comparing two stationary Fokker-Planck-Kolmogorov equations, the existence and uniqueness of the invariant probability measure for McKean-Vlasov SDEs are obtained via the log-Sobolev inequality and the Banach fixed point theorem. Finally, some examples are presented.
Xuming He, Jingshen Wang (2021)
Twenty years ago, Breiman (2001) called to our attention a significant cultural division in modeling and data analysis between stochastic data models and algorithmic models. Out of his deep concern that the statistical community was so deeply and almost exclusively committed to the former, Breiman warned that we were losing our ability to solve many real-world problems. Breiman was not the first, and certainly not the only, statistician to sound the alarm; we may refer to none other than John Tukey, who wrote almost 60 years ago that "data analysis is intrinsically an empirical science." However, the bluntness and timeliness of Breiman's article made it uniquely influential. It prepared us for the data science era and encouraged a new generation of statisticians to embrace a more broadly defined discipline. Some might argue that "the cultural division between these two statistical learning frameworks has been growing at a steady pace in recent years," to quote Mukhopadhyay and Wang (2020). In this commentary, we focus on some of the positive changes over the past 20 years and offer an optimistic outlook for our profession.
Understanding treatment effect heterogeneity in observational studies is of great practical importance to many scientific fields because the same treatment may affect different individuals differently. Quantile regression provides a natural framework for modeling such heterogeneity. In this paper, we propose a new method for inference on heterogeneous quantile treatment effects that incorporates high-dimensional covariates. Our estimator combines a debiased $\ell_1$-penalized regression adjustment with a quantile-specific covariate balancing scheme. We present a comprehensive study of the theoretical properties of this estimator, including weak convergence of the heterogeneous quantile treatment effect process to the sum of two independent, centered Gaussian processes. We illustrate the finite-sample performance of our approach through Monte Carlo experiments and an empirical example dealing with the differential effect of mothers' education on infant birth weights.
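The quantile regression framework rests on the check (pinball) loss, whose minimizer is the conditional quantile. A minimal sketch of the standard loss definition follows; it is not the authors' debiased, covariate-balanced estimator.

```python
def pinball_loss(y, yhat, tau):
    """Check loss rho_tau(u) = u * (tau - 1{u < 0}), for u = y - yhat.
    Minimizing its expectation over yhat yields the tau-th
    conditional quantile of y."""
    u = y - yhat
    return u * (tau - (1.0 if u < 0 else 0.0))
```

At tau = 0.5 the loss is half the absolute error, recovering median regression; asymmetric tau values penalize over- and under-prediction differently, which is what lets the method trace out treatment effects across the outcome distribution.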
Paragraphs are an important class of document entities. We propose a new approach to paragraph identification using spatial graph convolutional networks (GCNs) applied to OCR text boxes. Two steps, namely line splitting and line clustering, are performed to extract paragraphs from the lines in the OCR results. Each step uses a beta-skeleton graph constructed from bounding boxes, where the graph edges provide efficient support for graph convolution operations. With purely layout input features, the GCN model size is 3 to 4 orders of magnitude smaller than that of R-CNN-based models, while achieving comparable or better accuracy on PubLayNet and other datasets. Furthermore, the GCN models generalize well from synthetic training data to real-world images and adapt well to variable document styles.
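The line-clustering step can be illustrated with a simplified proximity graph standing in for the true beta-skeleton. The gap threshold and horizontal-overlap rule below are illustrative assumptions, not the paper's construction.

```python
def cluster_lines(boxes, max_gap=12):
    """Group OCR line boxes (x0, y0, x1, y1) into paragraphs by
    union-find over a proximity graph: connect two lines whose
    horizontal spans overlap and whose vertical gap is small."""
    parent = list(range(len(boxes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i, (x0, y0, x1, y1) in enumerate(boxes):
        for j, (a0, b0, a1, b1) in enumerate(boxes):
            # edge if spans overlap and line j starts just below line i
            if i < j and min(x1, a1) > max(x0, a0) and 0 <= b0 - y1 <= max_gap:
                union(i, j)

    groups = {}
    for i in range(len(boxes)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

In the paper, a learned GCN scores the edges of a beta-skeleton graph rather than applying a fixed threshold, but the connected-component grouping of line boxes is the same shape of computation.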
Xinan Wang, Yishen Wang, Di Shi (2020)
Power transfer limits, or transfer capability (TC), directly relate to system operation and control as well as electricity markets. As a consequence, their assessment has to comply with static constraints, such as line thermal limits, and dynamic constraints, such as transient stability limits, voltage stability limits, and small-signal stability limits. Since load dynamics have a substantial impact on power system transient stability, load models are one critical factor affecting power transfer limits. Currently, multiple load models have been proposed and adopted in industry and academia, including the ZIP model, the ZIP plus induction motor composite model (ZIP + IM), and the WECC composite load model (WECC CLM). Each has its unique advantages, but their impacts on power transfer limits have not yet been adequately addressed. One existing challenge is fitting high-order nonlinear models such as the WECC CLM. In this study, we adopt a double deep Q-learning network (DDQN) agent as a general load modeling tool in the dynamic assessment procedure and fit the same transient field measurements into different load models. A comprehensive evaluation is then conducted to quantify the load models' impacts on the power transfer limits. The simulation environment is the IEEE 39-bus system constructed in the Transient Security Assessment Tool (TSAT).
We investigated the evolution of ferromagnetism in layered Fe$_3$GeTe$_2$ flakes under different pressures and temperatures using in situ magnetic circular dichroism (MCD) spectroscopy. We found that the rectangular hysteresis loop under an out-of-plane magnetic field sweep persists below 7 GPa. Above that pressure, an intermediate state appears in the low-temperature region, signaled by an 8-shaped skewed hysteresis loop. Meanwhile, the coercive field and Curie temperature decrease with increasing pressure, implying a decrease in the exchange interaction and the magnetocrystalline anisotropy under pressure. The intermediate phase has a labyrinthine domain structure, which is attributed to the increased ratio of exchange interaction to magnetocrystalline anisotropy based on Jagla's theory. Moreover, our calculations reveal a weak structural transition around 6 GPa, which leads to a drop in the magnetic moment of the Fe ions.
Midline-related pathological image features are crucial for evaluating the severity of brain compression caused by stroke or traumatic brain injury (TBI). Automated midline delineation not only improves assessment and clinical decision making for patients with stroke symptoms or head trauma but also reduces the time to diagnosis. Nevertheless, most previous methods model the midline by localizing anatomical points, which are hard to detect or even missing in severe cases. In this paper, we formulate brain midline delineation as a segmentation task and propose a three-stage framework. The proposed framework first aligns an input CT image into the standard space. Then, the aligned image is processed by a midline detection network (MD-Net) integrated with a CoordConv layer and a cascade atrous convolution module to obtain a probability map. Finally, we formulate optimal midline selection as a pathfinding problem to address the discontinuity of midline delineation. Experimental results show that our proposed framework achieves superior performance on one in-house dataset and one public dataset.
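The pathfinding formulation can be sketched as a dynamic program over the probability map: pick one column per row so the summed probability is maximal while the path stays continuous. This is a minimal illustration; the shift constraint and scoring are assumptions, not the paper's exact formulation.

```python
def select_midline(prob, max_shift=1):
    """Dynamic programming over a rows x cols probability map.
    Chooses one column per row maximizing total probability, with
    consecutive rows shifting by at most `max_shift` columns, which
    enforces a continuous (gap-free) midline."""
    rows, cols = len(prob), len(prob[0])
    score = [list(prob[0])]  # best cumulative score ending at (r, c)
    back = []                # backpointers for path recovery
    for r in range(1, rows):
        prev = score[-1]
        cur, bk = [], []
        for c in range(cols):
            lo, hi = max(0, c - max_shift), min(cols, c + max_shift + 1)
            best = max(range(lo, hi), key=lambda k: prev[k])
            cur.append(prob[r][c] + prev[best])
            bk.append(best)
        score.append(cur)
        back.append(bk)
    # trace back from the best final column
    c = max(range(cols), key=lambda k: score[-1][k])
    path = [c]
    for bk in reversed(back):
        c = bk[c]
        path.append(c)
    return path[::-1]
```

Even when the network's probability map has a low-confidence gap in a severely compressed region, the global optimum still threads a single connected path through it, which is the point of replacing per-row argmax with pathfinding.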
Growing model complexity in load modeling has created high dimensionality in parameter estimation, substantially increasing the associated computational costs. In this paper, a tensor-based method is proposed for identifying composite load model (CLM) parameters and for conducting a global sensitivity analysis. The tensor format and Fokker-Planck equations are used to estimate the power output response of the CLM while simultaneously varying parameters over their full distribution ranges. The proposed tensor structure is shown to be effective for tackling high-dimensional parameter estimation and for improving computational performance in load modeling through global sensitivity analysis.
Xinan Wang, Yishen Wang, Di Shi (2019)
With the increasing complexity of modern power systems, conventional dynamic load modeling with ZIP and induction motors (ZIP + IM) is no longer adequate to address current load characteristic transitions. In recent years, the WECC composite load model (WECC CLM) has been shown to capture dynamic load responses more effectively than traditional load models in various stability studies and contingency analyses. However, a detailed WECC CLM typically has a high degree of complexity, with over one hundred parameters, and there is no systematic approach to identifying and calibrating them. Enabled by the wide deployment of PMUs and advanced deep learning algorithms, we propose a double deep Q-learning network (DDQN)-based, two-stage load modeling framework for the WECC CLM. This two-stage method decomposes the complicated WECC CLM for more efficient identification and does not require explicit model details. In the first stage, the DDQN agent determines an accurate load composition. In the second stage, the parameters of the WECC CLM are selected from a group of Monte Carlo simulations. The set of selected load parameters is expected to best approximate the true transient responses. The proposed framework is verified on the IEEE 39-bus test system using commercial simulation platforms.
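The double-DQN target underlying such an agent can be sketched with a toy tabular stand-in for the two networks. This is illustrative only: the actual framework uses neural function approximation, and the state/action encoding here is an assumption.

```python
def ddqn_target(q_online, q_target, next_state, reward, gamma=0.99, done=False):
    """Double DQN target: the online network *selects* the greedy
    action, the target network *evaluates* it. Decoupling selection
    from evaluation reduces the overestimation bias of vanilla
    Q-learning. Here q_online/q_target are dicts mapping a state to
    a list of per-action Q-values (a tabular stand-in)."""
    if done:
        return reward  # terminal transition: no bootstrapped term
    a_star = max(range(len(q_online[next_state])),
                 key=lambda a: q_online[next_state][a])
    return reward + gamma * q_target[next_state][a_star]
```

The target network is typically a delayed copy of the online network, refreshed every few thousand updates, so the bootstrapped targets stay stable while the online network learns.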