
Technical Note: Generating Realistic Fighting Scenes by Game Tree

Posted by Hubert P. H. Shum
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Recently, there has been a great deal of research on synthesizing and editing the motion of a single avatar in a virtual environment. However, much less work has addressed simulating continuous interactions between multiple avatars, such as fighting. In this paper, we propose a new method to generate realistic fighting scenes from motion capture data. We introduce an algorithm called the temporal expansion approach, which maps the continuous-time action plan onto a discrete causality space so that turn-based evaluation methods can be applied. As a result, many mature algorithms from strategy games, such as the Minimax algorithm and $\alpha$-$\beta$ pruning, become available. We also propose a method to generate and use an offense/defense table, which captures the spatio-temporal relationship between attacks and dodges, to incorporate defensive tactical maneuvers into the scene. Using our method, avatars plan their strategies while taking the opponent's reactions into account. Fighting scenes with multiple avatars are generated to demonstrate the effectiveness of our algorithm. The proposed method can also be applied to other kinds of continuous activities that require strategy planning, such as sports games.
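To make the planning step concrete, the Python sketch below runs Minimax with alpha-beta pruning over a turn-based abstraction of the fight, scored by a toy offense/defense table. The action names, table values, and search depth are invented for illustration only; the paper's actual table and turn discretization are derived from motion capture data via the temporal expansion approach.

# Toy offense/defense table: (attack, dodge) -> score from the attacker's
# point of view; positive means the attack is likely to land, negative
# means the dodge gains an advantage. Values are illustrative assumptions.
OFFENSE_DEFENSE_TABLE = {
    ("high_punch", "duck"): -1.0, ("high_punch", "step_back"): 0.5,
    ("high_punch", "jump"): 1.0,  ("low_kick", "duck"): 1.0,
    ("low_kick", "step_back"): 0.4, ("low_kick", "jump"): 0.2,
    ("idle", "duck"): 0.0, ("idle", "step_back"): 0.0, ("idle", "jump"): 0.0,
}
ATTACKS = ["high_punch", "low_kick", "idle"]
DODGES = ["duck", "step_back", "jump"]

def evaluate(turns):
    # Score a sequence of alternating (attack, dodge) turns via the table.
    return sum(OFFENSE_DEFENSE_TABLE[(a, d)]
               for a, d in zip(turns[0::2], turns[1::2]))

def alphabeta(turns, depth, alpha, beta, attacker_to_move):
    # Standard Minimax with alpha-beta pruning over the discretized turns.
    if depth == 0:
        return evaluate(turns), None
    best_action = None
    if attacker_to_move:
        value = float("-inf")
        for a in ATTACKS:
            score, _ = alphabeta(turns + [a], depth - 1, alpha, beta, False)
            if score > value:
                value, best_action = score, a
            alpha = max(alpha, value)
            if alpha >= beta:      # beta cut-off: the opponent avoids this line
                break
    else:
        value = float("inf")
        for d in DODGES:
            score, _ = alphabeta(turns + [d], depth - 1, alpha, beta, True)
            if score < value:
                value, best_action = score, d
            beta = min(beta, value)
            if alpha >= beta:      # alpha cut-off
                break
    return value, best_action

score, opening = alphabeta([], 4, float("-inf"), float("inf"), True)
print("planned opening attack:", opening, "expected score:", score)

In this toy setting the maximizing player chooses attacks and the minimizing player chooses dodges, so the attacker plans while assuming the opponent reacts with the best available dodge, which is the role the offense/defense table plays in the paper's turn-based evaluation.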




Read also

We consider the problem of generating realistic traffic scenes automatically. Existing methods typically insert actors into the scene according to a set of hand-crafted heuristics and are limited in their ability to model the true complexity and diversity of real traffic scenes, thus inducing a content gap between synthesized traffic scenes and real ones. As a result, existing simulators lack the fidelity necessary to train and test self-driving vehicles. To address this limitation, we present SceneGen, a neural autoregressive model of traffic scenes that eschews the need for rules and heuristics. In particular, given the ego-vehicle state and a high definition map of the surrounding area, SceneGen inserts actors of various classes into the scene and synthesizes their sizes, orientations, and velocities. We demonstrate on two large-scale datasets SceneGen's ability to faithfully model distributions of real traffic scenes. Moreover, we show that SceneGen coupled with sensor simulation can be used to train perception models that generalize to the real world.
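As a rough illustration of the autoregressive idea (placing each actor conditioned on the actors already placed), the toy Python sketch below inserts actors one at a time and conditions on earlier actors only through an overlap check; the classes, value ranges, and sampling distributions are placeholders, not SceneGen's learned model.

import random

def sample_actor(placed, map_extent=50.0, min_gap=4.0, max_tries=30):
    # Rejection-sample an actor whose position keeps a minimum gap from
    # every actor already placed (the only conditioning in this toy).
    for _ in range(max_tries):
        actor = {
            "cls": random.choice(["car", "pedestrian", "bicyclist"]),
            "x": random.uniform(-map_extent, map_extent),
            "y": random.uniform(-map_extent, map_extent),
            "heading_rad": random.uniform(0.0, 6.283),
            "speed_mps": random.uniform(0.0, 15.0),
        }
        if all((actor["x"] - p["x"]) ** 2 + (actor["y"] - p["y"]) ** 2
               >= min_gap ** 2 for p in placed):
            return actor
    return None

def generate_scene(num_actors=10):
    scene = []
    for _ in range(num_actors):       # autoregressive: each draw sees 'scene'
        actor = sample_actor(scene)
        if actor is not None:
            scene.append(actor)
    return scene

print(generate_scene(5))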
We present a technique for rendering highly complex 3D scenes in real time by generating uniformly distributed points on the scene's visible surfaces. The technique is applicable to a wide range of scene types, including scenes directly based on complex and detailed CAD data consisting of billions of polygons (in contrast to scenes handcrafted solely for visualization). This allows such scenes to be visualized smoothly, even in VR on an HMD, with good image quality while maintaining the necessary frame rates. In contrast to other point-based rendering methods, we place points in an approximated blue-noise distribution only on visible surfaces and store them in a highly GPU-efficient data structure, allowing the number of rendered points to be progressively refined to maximize image quality for a given target frame rate. Our evaluation shows that scenes consisting of a large number of polygons can be rendered at interactive frame rates with good visual quality on standard hardware.
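The progressive refinement idea can be illustrated with a simple control loop that scales the point budget toward a target frame time based on the last measured frame time. The constants, damping factor, and budget limits below are illustrative assumptions, not the paper's GPU implementation.

TARGET_FPS = 90.0                      # assumed VR refresh rate
TARGET_FRAME_TIME = 1.0 / TARGET_FPS

def update_point_budget(budget, measured_frame_time,
                        min_points=100_000, max_points=50_000_000):
    # Move the budget a fraction of the way toward the value suggested by
    # the ratio of target to measured frame time; the 0.25 damping factor
    # is an illustrative choice to limit oscillation between frames.
    ratio = TARGET_FRAME_TIME / max(measured_frame_time, 1e-6)
    new_budget = int(budget * (1.0 + 0.25 * (ratio - 1.0)))
    return max(min_points, min(max_points, new_budget))

# A slow frame (15 ms against an ~11.1 ms target) shrinks the next budget.
print(update_point_budget(10_000_000, measured_frame_time=0.015))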
In this paper, we propose Text2Scene, a model that generates various forms of compositional scene representations from natural language descriptions. Unlike recent works, our method does NOT use Generative Adversarial Networks (GANs). Text2Scene instead learns to sequentially generate objects and their attributes (location, size, appearance, etc.) at every time step by attending to different parts of the input text and the current status of the generated scene. We show that, with minor modifications, the proposed framework can handle the generation of different forms of scene representations, including cartoon-like scenes, object layouts corresponding to real images, and synthetic images. Our method is not only competitive with state-of-the-art GAN-based methods on automatic metrics and superior under human judgment, but also has the advantage of producing interpretable results.
This paper presents a commentator for providing real-time game commentary in a fighting game. The commentary takes into account highlight cues, obtained by analyzing scenes during gameplay, as input to adjust the pitch and loudness of commentary spoken using a Text-to-Speech (TTS) technology. We investigate different designs for pitch and loudness adjustment. The proposed AI consists of two parts: a dynamic adjuster for controlling the pitch and loudness of the TTS and a real-time game commentary generator. We conduct a pilot study on a fighting game, and our results show that by adjusting the loudness significantly according to the level of game highlight, the entertainment value of the gameplay can be enhanced.
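A minimal sketch of the adjustment idea, assuming a linear mapping from a normalized highlight level to pitch and loudness; the base values and ranges are invented for illustration and are not the paper's adjustment designs.

def adjust_prosody(highlight_level, base_pitch_hz=120.0, base_gain_db=0.0):
    # Clamp the highlight level to [0, 1] and map it linearly to prosody.
    level = min(max(highlight_level, 0.0), 1.0)
    pitch_hz = base_pitch_hz * (1.0 + 0.3 * level)   # up to +30% higher pitch
    gain_db = base_gain_db + 6.0 * level             # up to +6 dB louder
    return pitch_hz, gain_db

# A near-KO moment (highlight level 0.9) is spoken noticeably higher and louder.
print(adjust_prosody(0.9))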
We propose an approach to generate realistic and high-fidelity stock market data based on generative adversarial networks (GANs). Our Stock-GAN model employs a conditional Wasserstein GAN to capture the history dependence of orders. The generator design includes specially crafted aspects, including components that approximate the market's auction mechanism and augmentation of the order history with order-book constructions to improve the generation task. We perform an ablation study to verify the usefulness of aspects of our network structure. We provide a mathematical characterization of the distribution learned by the generator. We also propose statistics to measure the quality of generated orders. We test our approach with synthetic and actual market data, compare it to several baseline generative models, and find the generated data to be close to real data.