
Precisely Analyzing Loss in Interface Adapter Chains

Posted by: Yoo Chung
Publication date: 2010
Research field: Informatics Engineering
Paper language: English
Author: Yoo Chung





Interface adaptation allows code written for one interface to be used with a software component that exposes another interface. When multiple adapters are chained together to make certain adaptations possible, we need a way to analyze how well the adaptation is done when more than one chain can be used. We introduce an approach to precisely analyzing the loss in an interface adapter chain using a simple form of abstract interpretation.
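
As a rough illustration of the set-based view (a minimal sketch, not the paper's formalism), an adapter can be abstracted as a map from each target method it provides to the source methods that method requires; propagating the set of available methods through the chain then shows exactly which methods are lost:

```python
# Minimal sketch: abstract interpretation of an adapter chain over sets of methods.
# All names and data here are hypothetical, for illustration only.

AdapterAbstraction = dict[str, set[str]]  # target method -> required source methods

def apply_adapter(available: set[str], adapter: AdapterAbstraction) -> set[str]:
    """Target methods that remain usable given the currently available methods."""
    return {m for m, needs in adapter.items() if needs <= available}

def chain_loss(initial: set[str], chain: list[AdapterAbstraction],
               wanted: set[str]) -> set[str]:
    """Propagate availability through the chain and report which wanted methods are lost."""
    available = initial
    for adapter in chain:
        available = apply_adapter(available, adapter)
    return wanted - available

# Example: method "c" is lost because it needs "y", which the first adapter
# cannot provide from the single available source method "p".
a1 = {"x": {"p"}, "y": {"q", "r"}}
a2 = {"b": {"x"}, "c": {"y"}}
print(chain_loss({"p"}, [a1, a2], {"b", "c"}))  # -> {'c'}
```

Comparing the loss sets produced by different candidate chains gives a basis for choosing among them.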




Read also

Yoo Chung, Dongman Lee (2011)
Interface adapters allow applications written for one interface to be reused with another interface without having to rewrite application code, and chaining interface adapters can significantly reduce the development effort required to create the adapters. However, interface adapters will often be unable to convert interfaces perfectly, so there must be a way to analyze the loss from interface adapter chains in order to improve the quality of interface adaptation. This paper describes a probabilistic approach to analyzing loss in interface adapter chains, which models not only whether a method can be adapted but also how well it can be adapted. We also show that probabilistic optimal adapter chaining is an NP-complete problem, so we describe a greedy algorithm which can construct an optimal interface adapter chain, although it requires exponential time in the worst case.
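
A toy sketch of the probabilistic view (hypothetical method names and probabilities, not the authors' model): each adapter assigns every method a probability of being adapted correctly, and chaining multiplies the per-hop probabilities, so longer chains generally adapt methods less well.

```python
# Toy illustration of probabilistic loss along an adapter chain.
# Each hop maps a method name to the probability that it is adapted correctly;
# a missing entry means the hop cannot adapt that method at all.

def chain_adaptation_probability(per_hop: list[dict[str, float]], method: str) -> float:
    """Probability that `method` survives every adapter in the chain."""
    p = 1.0
    for hop in per_hop:
        p *= hop.get(method, 0.0)
    return p

chain = [{"play": 1.0, "seek": 0.8}, {"play": 0.9, "seek": 0.5}]
print(chain_adaptation_probability(chain, "seek"))  # -> 0.4
```

Selecting the best chain under such a measure is the probabilistic optimal chaining problem that the abstract notes is NP-complete.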
RGBT tracking has attracted increasing attention since RGB and thermal infrared data have strong complementary advantages, which could enable trackers to work all day and in all weather. However, how to effectively represent RGBT data for visual tracking remains poorly studied. Existing works usually focus on extracting modality-shared or modality-specific information, but the potential of these two cues is not well explored and exploited in RGBT tracking. In this paper, we propose a novel multi-adapter network to jointly perform modality-shared, modality-specific and instance-aware target representation learning for RGBT tracking. To this end, we design three kinds of adapters within an end-to-end deep learning framework. Specifically, we use the modified VGG-M as the generality adapter to extract the modality-shared target representations. To extract the modality-specific features while reducing the computational complexity, we design a modality adapter, which adds a small block to the generality adapter in each layer and each modality in a parallel manner. Such a design can learn multilevel modality-specific representations with a modest number of parameters, as the vast majority of parameters are shared with the generality adapter. We also design an instance adapter to capture the appearance properties and temporal variations of a certain target. Moreover, to enhance the shared and specific features, we employ a multiple kernel maximum mean discrepancy loss to measure the distribution divergence of different modal features and integrate it into each layer for more robust representation learning. Extensive experiments on two RGBT tracking benchmark datasets demonstrate the outstanding performance of the proposed tracker against state-of-the-art methods.
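
A minimal PyTorch-style sketch of the parallel-adapter idea (illustrative layer sizes and names, not the paper's VGG-M-based architecture): a small per-modality convolution is added alongside a larger shared convolution, so most parameters stay shared across modalities.

```python
# Sketch: one shared ("generality") layer with small per-modality adapters in parallel.
# Channel sizes, kernel sizes, and class names are assumptions for illustration.
import torch
import torch.nn as nn

class SharedLayerWithModalityAdapters(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Generality adapter: large convolution shared by both modalities.
        self.shared = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # Modality adapters: small parallel blocks, one per modality.
        self.rgb_adapter = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.tir_adapter = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        adapter = self.rgb_adapter if modality == "rgb" else self.tir_adapter
        # Shared and modality-specific responses are summed, as in a parallel block.
        return torch.relu(self.shared(x) + adapter(x))
```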
Software effort estimation models are typically developed based on an underlying assumption that all data points are equally relevant to the prediction of effort for future projects. The dynamic nature of several aspects of the software engineering process could mean that this assumption does not hold in at least some cases. This study employs three kernel estimator functions to test the stationarity assumption in five software engineering datasets that have been used in the construction of software effort estimation models. The kernel estimators are used in the generation of nonuniform weights which are subsequently employed in weighted linear regression modeling. In each model, older projects are assigned smaller weights while the more recently completed projects are assigned larger weights, to reflect their potentially greater relevance to present or future projects that need to be estimated. Prediction errors are compared to those obtained from uniform models. Our results indicate that, for the datasets that exhibit underlying nonstationary processes, uniform models are more accurate than the nonuniform models; that is, models based on kernel estimator functions are worse than the models where no weighting was applied. In contrast, the accuracies of uniform and nonuniform models for datasets that exhibited stationary processes were essentially equivalent. Our analysis indicates that as the heterogeneity of a dataset increases, the effect of stationarity is overridden. The results of our study also confirm prior findings that the accuracy of effort estimation models is independent of the type of kernel estimator function used in model development.
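
A small sketch of the weighting scheme (hypothetical data and bandwidth, not the study's exact setup): a Gaussian kernel over project completion time yields weights that favor recently completed projects, which then enter a weighted least-squares fit.

```python
# Sketch: kernel-weighted linear regression that favors recent projects.
# The dataset, feature, and bandwidth below are made up for illustration.
import numpy as np

def kernel_weights(completion_times: np.ndarray, bandwidth: float) -> np.ndarray:
    """Gaussian kernel weights, largest for the most recently completed projects."""
    latest = completion_times.max()
    return np.exp(-((latest - completion_times) ** 2) / (2 * bandwidth ** 2))

def weighted_least_squares(X: np.ndarray, y: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Solve (X^T W X) beta = X^T W y for the weighted regression coefficients."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy example: intercept plus one size feature, effort as the response.
times = np.array([2001.0, 2004.0, 2007.0, 2010.0])
X = np.column_stack([np.ones(4), np.array([10.0, 25.0, 40.0, 60.0])])
y = np.array([120.0, 260.0, 410.0, 620.0])
print(weighted_least_squares(X, y, kernel_weights(times, bandwidth=3.0)))
```

Setting all weights to one recovers the uniform model that the study compares against.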
Software repositories contain knowledge on how software engineering teams work, communicate, and collaborate. It can be used to develop a data-informed view of a team's development process, which in turn can be employed for process improvement initiatives. In modern, Agile development methods, process improvement takes place in Retrospective meetings, in which the last development iteration is discussed. However, previously proposed activities that take place in these meetings often do not rely on project data, instead depending solely on the perceptions of team members. We propose new Retrospective activities, based on mining the software repositories of individual teams, to complement existing approaches with more objective, data-informed process views.
The reaction $^{12}$C + $^{13}$C at 95 MeV bombarding energy is studied using the GARFIELD + Ring Counter apparatus located at the INFN Laboratori Nazionali di Legnaro. In this paper we want to investigate the de-excitation of $^{25}$Mg, aiming both at a new stringent test of the statistical description of nuclear decay and at a direct comparison with the decay of the system $^{24}$Mg formed through $^{12}$C + $^{12}$C reactions previously studied. Thanks to the large acceptance of the detector and to its good fragment identification capabilities, we could apply stringent selections on fusion-evaporation events, requiring their completeness in charge. The main decay features of the evaporation residues and of the emitted light particles are overall well described by a pure statistical model; however, as for the case of the previously studied $^{24}$Mg, we observed some deviations in the branching ratios, in particular for those chains involving only the evaporation of $\alpha$ particles. From this point of view the behavior of the $^{24}$Mg and $^{25}$Mg decay cases appears to be rather similar. An attempt to obtain a full mass balance even without neutron detection is also discussed.