We model investor heterogeneity using different required returns on an investment and evaluate the impact on its valuation. By assuming no disagreement on the cash flows, we emphasize how risk preferences in particular, but also the costs of capital, influence a subjective evaluation of the decision to invest now or retain the option to invest in the future. We propose a risk-adjusted valuation model to facilitate investors' subjective decision-making in response to the market valuation of an investment opportunity. The investor's subjective assessment arises from their perceived misvaluation of the investment by the market, so projected cash flows are discounted using two different rates representing the investor's and the market's views. This frees our model from perfect- or imperfect-hedging assumptions; instead, we are able to illustrate the hedging effect on the real option value when perceptions of risk premia diverge. During crisis periods, delaying an investment becomes more valuable as the idiosyncratic risk of future cash flows increases, but the decision-maker may rush to invest too quickly when the risk level is exceptionally high. Our model verifies features established by classical real-option valuation models and provides many new insights into the importance of modelling divergences in decision-makers' risk premia, especially during crisis periods. It also has many practical advantages because it requires no more parameter inputs than basic discounted cash flow approaches, such as the marketed asset disclaimer method, but the outputs are much richer. They allow for complex interactions between cost and revenue uncertainties, as well as easy exploration of the effects of hedgeable and un-hedgeable risks on the real option value. Furthermore, we provide fully adjustable Python code in which all parameter values can be chosen by the user.
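The paper ships its own fully adjustable Python code; the snippet below is only a minimal sketch of the core idea, discounting the same agreed-upon cash flows at two rates. All names and numbers here are illustrative assumptions, not the authors' parameters or their model.

```python
import numpy as np

def npv(cash_flows, rate):
    """Present value of a cash-flow series at t = 1, 2, ..., minus nothing."""
    t = np.arange(1, len(cash_flows) + 1)
    return np.sum(np.asarray(cash_flows) / (1.0 + rate) ** t)

# Hypothetical inputs: identical projected cash flows, discounted at the
# market's required return and at the investor's subjective required return.
cash_flows = [120.0, 130.0, 140.0]
investment_cost = 300.0
r_market, r_investor = 0.08, 0.12   # diverging risk premia (assumed values)

market_value = npv(cash_flows, r_market) - investment_cost
subjective_value = npv(cash_flows, r_investor) - investment_cost
print(f"market NPV: {market_value:.2f}, subjective NPV: {subjective_value:.2f}")
# A gap between the two signals perceived misvaluation, which is what drives
# the invest-now vs. retain-the-option decision in the risk-adjusted model.
```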
Jiaxi Cheng, Zhenhao Cen, 2021
Plasmon-induced transparency (PIT) displays complex nonlinear dynamics that give rise to critical phenomena in areas such as nonlinear waves. However, such nonlinear solutions depend sensitively on the choice of parameters and of the potential in the Schrödinger equation. Despite this complexity, the machine learning community has developed remarkably efficient methods for predicting complicated datasets by regression. Here, we consider a recurrent neural network (RNN) approach to predict the complex propagation of nonlinear solitons in plasmon-induced transparency metamaterial systems with applied potentials, bypassing the need for analytical and numerical treatment of a governing model. We demonstrate the success of this scheme in predicting the propagation of nonlinear solitons solely from a given initial condition and potential. We show close agreement between simulation results and predictions made by long short-term memory (LSTM) artificial neural networks. The framework presented in this work opens up a new perspective for the application of RNNs to quantum systems and nonlinear waves governed by Schrödinger-type equations, for example the nonlinear dynamics of cold-atom systems and nonlinear fiber optics.
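As a minimal sketch of such an LSTM predictor (the architecture, grid size, and tensor shapes below are our assumptions, not the authors' implementation), one can map the field profile and the applied potential at each step to the profile at the next step:

```python
import torch
import torch.nn as nn

class SolitonLSTM(nn.Module):
    """Predicts the field profile at step t+1 from the profile and potential at t."""
    def __init__(self, n_grid, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2 * n_grid, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_grid)

    def forward(self, field_seq, potential):
        # field_seq: (batch, steps, n_grid) -- e.g. |u|^2 on the spatial grid
        # potential: (batch, n_grid)       -- applied potential, tiled per step
        pot = potential.unsqueeze(1).expand(-1, field_seq.size(1), -1)
        out, _ = self.lstm(torch.cat([field_seq, pot], dim=-1))
        return self.head(out)   # one-step-ahead predicted profiles

model = SolitonLSTM(n_grid=128)
u = torch.randn(4, 50, 128)    # dummy propagation histories
V = torch.randn(4, 128)        # dummy applied potentials
pred = model(u, V)             # (4, 50, 128)
```

Trained on simulated propagation data, such a network is then rolled out autoregressively from only the initial condition and potential.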
In today's networked society, many real-world problems can be formalized as predicting links in networks, such as Facebook friendship suggestions, e-commerce recommendations, and the prediction of scientific collaborations in citation networks. Increasingly often, the link prediction problem is tackled by means of network embedding methods, owing to their state-of-the-art performance. However, these methods lack transparency when compared to simpler baselines, and as a result their robustness against adversarial attacks is a possible point of concern: could one or a few small adversarial modifications to the network have a large impact on link prediction performance when using a network embedding model? Prior research has already investigated adversarial robustness for network embedding models, focusing on classification at the node and graph level. Robustness with respect to the link prediction downstream task, on the other hand, has been explored much less. This paper contributes to filling this gap by studying the adversarial robustness of Conditional Network Embedding (CNE), a state-of-the-art probabilistic network embedding model, for link prediction. More specifically, given CNE and a network, we measure the sensitivity of the model's link predictions to small adversarial perturbations of the network, namely changes to the link status of a node pair. Our approach thus allows one to identify the links and non-links in the network that are most vulnerable to such perturbations, for further investigation by an analyst. We analyze the characteristics of the most and least sensitive perturbations, and empirically confirm that our approach not only succeeds in identifying the most vulnerable links and non-links, but also does so in a time-efficient manner thanks to an effective approximation.
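The brute-force version of this sensitivity measure is easy to state (a hedged sketch: a spectral embedding stands in for CNE, and re-embedding after every flip is exactly the cost the paper's approximation avoids):

```python
import numpy as np

def embed(adj, dim=8):
    """Stand-in embedding (spectral); the paper uses CNE instead."""
    vals, vecs = np.linalg.eigh(adj.astype(float))
    return vecs[:, -dim:] * np.sqrt(np.abs(vals[-dim:]))

def link_score(emb, i, j):
    """Dot-product score as a simple proxy for a link probability."""
    return emb[i] @ emb[j]

def sensitivity(adj, target, perturb, dim=8):
    """Change in the target pair's score when the perturb pair's link is flipped."""
    base = link_score(embed(adj, dim), *target)
    flipped = adj.copy()
    i, j = perturb
    flipped[i, j] = flipped[j, i] = 1 - flipped[i, j]
    return abs(link_score(embed(flipped, dim), *target) - base)

# Toy usage: rank all candidate perturbations by impact on one prediction.
rng = np.random.default_rng(0)
A = (rng.random((20, 20)) < 0.15).astype(int)
A = np.triu(A, 1); A = A + A.T
scores = {(i, j): sensitivity(A, target=(0, 1), perturb=(i, j))
          for i in range(20) for j in range(i + 1, 20)}
most_vulnerable = max(scores, key=scores.get)
```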
In this paper, we present Generic Object Detection (GenOD), one of the largest object detection systems deployed to a web-scale general visual search engine, capable of detecting over 900 categories for all Microsoft Bing Visual Search queries in near real-time. It acts as a fundamental visual query understanding service that provides object-centric information and shows gains in multiple production scenarios, improving upon domain-specific models. We discuss the challenges of collecting data and of training, deploying, and updating such a large-scale object detection model with multiple dependencies. We describe a data collection pipeline that reduces per-bounding-box labeling cost by 81.5% and latency by 61.2% while improving annotation quality. We show that GenOD can improve weighted average precision by over 20% compared to multiple domain-specific models. We also improve model update agility by nearly 2 times with the proposed disjoint detector training compared to joint fine-tuning. Finally, we demonstrate how GenOD benefits visual search applications by significantly improving object-level search relevance by 54.9% and user engagement by 59.9%.
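One plausible reading of disjoint detector training, as opposed to joint fine-tuning, is sketched below. This is not the GenOD implementation: the shapes are toy values and a classification head stands in for a detection head; the point is only that freezing the shared backbone lets a new head be trained without touching (or re-validating) the rest of the system.

```python
import torch
import torch.nn as nn

# Shared backbone (frozen) and a head for newly added categories (trainable).
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
new_head = nn.Linear(16, 40)            # hypothetical new-category head

for p in backbone.parameters():
    p.requires_grad = False             # disjoint: only the new head updates

opt = torch.optim.SGD(new_head.parameters(), lr=1e-2)
x = torch.randn(8, 3, 64, 64)           # dummy image batch
y = torch.randint(0, 40, (8,))          # dummy labels
loss = nn.functional.cross_entropy(new_head(backbone(x)), y)
loss.backward(); opt.step()
```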
Plasmon-induced transparency (PIT) in advanced materials has attracted extensive attention in both theoretical and applied physics. Here, we considered a scheme that can produce PIT and studied the characteristics of ultraslow low-power magnetic solitons. The PIT metamaterial is constructed as an array of unit cells consisting of two coupled varactor-loaded split-ring resonators. Simulations verified that ultraslow magnetic solitons can be generated in this type of metamaterial. Because exact solutions of nonlinear equations are always difficult to acquire, various numerical methods are applied instead; however, the ultimate results depend on the initial conditions and the propagation distance. In this article, an artificial neural network (ANN) was used as a supervised learning model to predict the evolution and final mathematical expressions through training on samples with disparate initial conditions. Specifically, the influence of the number of hidden layers was discussed. Additionally, the learning results obtained with several training algorithms were analyzed and compared. Our research opens a route for employing machine learning algorithms to save time in both physical and engineering applications of Schrödinger-type systems.
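A minimal sketch of such a hidden-layer study follows. The synthetic data, `MLPRegressor` choice, and layer widths are our assumptions, not the paper's setup; the point is only the pattern of sweeping network depth on samples with disparate initial conditions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Placeholder data standing in for (initial condition, distance) -> final profile.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 32))               # sampled inputs
y = np.tanh(X @ rng.normal(size=(32, 32)))   # placeholder target evolution

for layers in [(64,), (64, 64), (64, 64, 64)]:   # vary hidden-layer depth
    net = MLPRegressor(hidden_layer_sizes=layers, max_iter=500, random_state=0)
    net.fit(X[:400], y[:400])
    err = mean_squared_error(y[400:], net.predict(X[400:]))
    print(layers, f"test MSE = {err:.4f}")
```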
Recent years have witnessed unprecedented success achieved by deep learning models in the field of computer vision. However, their vulnerability to carefully crafted adversarial examples has also attracted increasing attention from researchers. Motivated by the observation that adversarial examples stem from the non-robust features models learn from the original dataset, we propose the concepts of the salient feature (SF) and the trivial feature (TF). The former represents the class-related feature, while the latter is usually exploited to mislead the model. We extract these two features with a coupled generative adversarial network model and put forward a novel detection and defense method named the salient feature extractor (SFE) to defend against adversarial attacks. Concretely, detection is realized by separating and comparing the difference between the SF and TF of the input. At the same time, correct labels are obtained by re-identifying the SF, achieving the purpose of defense. Extensive experiments are carried out on the MNIST, CIFAR-10, and ImageNet datasets, where SFE shows state-of-the-art effectiveness and efficiency compared with baselines. Furthermore, we provide an interpretable understanding of the defense and detection process.
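The detect-then-defend logic reduces to a simple skeleton (a hedged sketch: the paper's extractors are learned coupled-GAN components, replaced here by placeholder linear maps, and the distance metric and threshold `tau` are hypothetical):

```python
import numpy as np

# Placeholder extractors; in the paper these are learned jointly by a coupled GAN.
def extract_sf(x, W_sf): return np.tanh(x @ W_sf)   # class-related feature
def extract_tf(x, W_tf): return np.tanh(x @ W_tf)   # misleading residual feature

def detect(x, W_sf, W_tf, tau):
    """Flag x as adversarial when its SF and TF disagree too strongly."""
    return np.linalg.norm(extract_sf(x, W_sf) - extract_tf(x, W_tf)) > tau

def defend(x, W_sf, classifier):
    """Defense: re-identify the label from the salient feature alone."""
    return classifier(extract_sf(x, W_sf))

# Toy usage with random weights and a fake 10-class read-out.
rng = np.random.default_rng(0)
W_sf, W_tf = rng.normal(size=(784, 64)), rng.normal(size=(784, 64))
x = rng.normal(size=784)                            # stand-in flattened image
flagged = detect(x, W_sf, W_tf, tau=10.0)           # tau: hypothetical threshold
label = defend(x, W_sf, lambda f: int(np.argmax(f[:10])))
```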
This paper presents a low-communication-overhead parallel method for solving the 3D incompressible Navier-Stokes equations. A fully explicit projection method with second-order space-time accuracy is adopted. Combined with fast Fourier transforms, the parallel diagonal dominant (PDD) algorithm for tridiagonal systems is employed to solve the pressure Poisson equation, differing from its recent applications to compact-scheme derivative computation (Abide et al. 2017) and the alternating-direction-implicit method (Moon et al. 2020). With a 2D pencil-like domain decomposition, the number of all-to-all communications is decreased to only two. The resulting MPI/OpenMP hybrid parallel code shows excellent strong scalability up to $10^4$ cores and small wall-clock time per timestep. Numerical simulations of turbulent channel flow at different friction Reynolds numbers ($Re_\tau$ = 550, 1000, 2000) have been conducted, and the statistics are in good agreement with reference data. The proposed method enables massive simulations of wall turbulence at high Reynolds numbers as well as of many other incompressible flows.
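The FFT-plus-tridiagonal structure of the pressure solve is sketched below in serial form. This is only a schematic under simplifying assumptions (second-order stencils, implicit homogeneous Dirichlet wall conditions for brevity, no MPI); the PDD step that distributes the z-direction solve with just two all-to-alls is omitted.

```python
import numpy as np
from scipy.linalg import solve_banded

def poisson_fft_tridiag(rhs, dx, dy, dz):
    """FFT in the two periodic directions, tridiagonal solve in z per wavenumber."""
    nx, ny, nz = rhs.shape
    rhs_hat = np.fft.fftn(rhs, axes=(0, 1))
    # Modified wavenumbers of the second-order central-difference Laplacian.
    kx = 2.0 * (np.cos(2 * np.pi * np.fft.fftfreq(nx)) - 1.0) / dx**2
    ky = 2.0 * (np.cos(2 * np.pi * np.fft.fftfreq(ny)) - 1.0) / dy**2
    p_hat = np.zeros_like(rhs_hat)
    band = np.zeros((3, nz), dtype=complex)
    band[0, 1:] = 1.0 / dz**2                 # superdiagonal
    band[2, :-1] = 1.0 / dz**2                # subdiagonal
    for i in range(nx):
        for j in range(ny):
            band[1, :] = kx[i] + ky[j] - 2.0 / dz**2   # main diagonal
            # End rows left unmodified => homogeneous Dirichlet walls; a real
            # channel solver would adjust them for Neumann conditions.
            p_hat[i, j, :] = solve_banded((1, 1), band, rhs_hat[i, j, :])
    return np.real(np.fft.ifftn(p_hat, axes=(0, 1)))

p = poisson_fft_tridiag(np.random.rand(16, 16, 32), dx=0.1, dy=0.1, dz=0.05)
```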
We derive the Thouless-Anderson-Palmer (TAP) equations for the Ghatak and Sherrington model. Our derivation, based on the cavity method, holds at high temperature and at all values of the crystal field. It confirms the prediction of Yokota.
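For orientation only (standard background, not the paper's derivation): the Ghatak-Sherrington model is the spin-1 generalization of the Sherrington-Kirkpatrick (SK) model with a crystal field, and TAP equations are mean-field self-consistency relations corrected by an Onsager reaction term, as in the well-known spin-1/2 SK case shown below.

```latex
% Ghatak-Sherrington Hamiltonian for spin-1 variables S_i \in \{-1, 0, +1\}
% with Gaussian couplings J_{ij} and crystal field D:
\begin{align}
  H &= -\sum_{i<j} J_{ij} S_i S_j + D \sum_i S_i^2 .
\end{align}
% For the spin-1/2 SK model, the TAP equations take the classic form
\begin{align}
  m_i &= \tanh\!\Big[\beta \sum_{j} J_{ij} m_j
          - \beta^2 J^2 (1 - q)\, m_i \Big],
  \qquad q = \frac{1}{N}\sum_i m_i^2 ,
\end{align}
% where the second term in the bracket is the Onsager reaction correction;
% the paper derives the analogue for the spin-1 Hamiltonian above via the
% cavity method, valid at high temperature and all values of D.
```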
Recent deep-learning-based small defect segmentation approaches are trained in specific settings and tend to be limited by a fixed context. Throughout training, the network inevitably learns the representation of the background of the training data before figuring out the defect. It underperforms at inference once the context changes, and this can only be remedied by retraining in every new setting, which ultimately limits practical robotic applications where contexts keep varying. To cope with this, instead of training a network context by context and hoping it generalizes, why not stop misleading it with any limited context and start training it with pure simulation? In this paper, we propose the network SSDS, which learns a way of distinguishing small defects between two images regardless of context, so that the network can be trained once and for all. A small defect detection layer utilizing the pose sensitivity of phase correlation between images is introduced and is followed by an outlier masking layer. The network is trained on randomly generated simulated data with simple shapes and generalizes to the real world. Finally, SSDS is validated on real-world collected data and demonstrates that, even when trained on cheap simulation, it can still find small defects in the real world, showing its effectiveness and potential for practical applications.
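The phase-correlation primitive at the core of the detection layer is standard and easy to illustrate (a minimal NumPy sketch; the synthetic images and defect placement are ours, and SSDS itself wraps this signal in learned layers):

```python
import numpy as np

def phase_correlation(a, b, eps=1e-8):
    """Normalized cross-power spectrum: a sharp peak marks the relative shift.
    A small defect perturbs the peak structure, which is the pose sensitivity
    the detection layer exploits."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + eps                 # keep phase information only
    corr = np.real(np.fft.ifft2(R))
    shift = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, shift

# Toy usage: identical images yield a delta-like peak at (0, 0);
# a small defect spreads energy away from that peak.
rng = np.random.default_rng(2)
img = rng.random((64, 64))
defected = img.copy()
defected[30:33, 40:43] += 1.0            # small synthetic defect
corr, shift = phase_correlation(img, defected)
print(shift, corr.max())
```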
Social media has become popular and has percolated into almost all aspects of our daily lives. While online posting proves very convenient for individual users, it also fosters the fast spreading of various rumors. The rapid and wide percolation of rumors can cause persistent adverse or detrimental impacts, so researchers invest great effort in reducing the negative impacts of rumors. Toward this end, rumor classification systems aim to detect, track, and verify rumors in social media. Such systems typically include four components: (i) a rumor detector, (ii) a rumor tracker, (iii) a stance classifier, and (iv) a veracity classifier. In order to improve the state of the art in rumor detection, tracking, and verification, we propose VRoC, a tweet-level variational-autoencoder-based rumor classification system. VRoC consists of a co-train engine that trains variational autoencoders (VAEs) and rumor classification components. The co-train engine helps the VAEs tune their latent representations to be classifier-friendly. We also show that VRoC is able to classify unseen rumors with high accuracy. On the PHEME dataset, VRoC consistently outperforms several state-of-the-art techniques, on both observed and unobserved rumors, by up to 26.9% in terms of macro-F1 scores.
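The co-training idea of combining a VAE objective with a classification loss on the latent code can be sketched minimally (a hedged sketch: the dimensions, encoders, and loss weighting below are illustrative assumptions, not the VRoC architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEClassifier(nn.Module):
    """Minimal co-trained VAE + classifier on the latent code."""
    def __init__(self, d_in=300, d_z=32, n_classes=2):
        super().__init__()
        self.enc = nn.Linear(d_in, 2 * d_z)   # outputs mu and log-variance
        self.dec = nn.Linear(d_z, d_in)
        self.clf = nn.Linear(d_z, n_classes)  # e.g. rumor vs. non-rumor

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar, self.clf(z)

model = VAEClassifier()
x = torch.randn(16, 300)                  # stand-in for tweet representations
y = torch.randint(0, 2, (16,))
recon, mu, logvar, logits = model(x)
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = F.mse_loss(recon, x) + kl + F.cross_entropy(logits, y)
loss.backward()   # the classification gradient shapes the latent space too,
                  # which is the "classifier-friendly" effect of co-training
```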