
Happy places or happy people? A multi-level modelling approach to the analysis of happiness and well-being

Posted by Dimitris Ballas
Publication date: 2012
Research field: Mathematical Statistics
Language: English





This paper aims to enhance our understanding of substantive questions regarding self-reported happiness and well-being through the specification and use of multi-level models. To date, there have been numerous quantitative studies of the happiness of individuals, based on single-level regression models in which a happiness index is typically related to a set of explanatory variables. There are also several single-level studies comparing aggregate happiness levels between countries. Nevertheless, there have been very few studies that attempt to simultaneously take into account variations in happiness and well-being at several different levels, such as individual, household, and area. Here, multi-level models are used with data from the British Household Panel Survey to assess the nature and extent of variations in happiness and well-being and to determine the relative importance of area (district, region), household, and individual characteristics for these outcomes. Moreover, having taken the characteristics at these different levels into account in the multi-level models, the paper shows how it is possible to identify areas that are associated with especially positive or negative feelings of happiness and well-being.
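The abstract does not give the authors' exact model specification, but a minimal sketch of this kind of multi-level (random-effects) regression can be written with statsmodels. Everything below is illustrative: the column names (life_sat, log_income, age, district, household) and the file are placeholders, not the variables or data used in the paper.

```python
# Minimal sketch: individuals nested in districts, with households added as a
# nested variance component. Column names and the input file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bhps_extract.csv")  # hypothetical long-format survey extract

model = smf.mixedlm(
    "life_sat ~ log_income + age + I(age**2)",     # individual-level fixed effects
    data=df,
    groups="district",                              # random intercept per district
    vc_formula={"household": "0 + C(household)"},   # household variance component
)
result = model.fit()
print(result.summary())

# District-level random effects: large positive/negative values flag areas with
# unusually high or low residual happiness once the covariates are accounted for,
# in the spirit of the area identification described above.
district_effects = result.random_effects
```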




Read also

This article uses data on subjective Life Satisfaction aggregated to the community level in Canada and examines the spatial interdependencies and spatial spillovers of community happiness. A theoretical model of utility is presented. Using spatial econometric techniques, we find that the utility of a community, proxied by subjective measures of life satisfaction, is affected both by the utility of neighbouring communities and by the latter's average household income and unemployment rate. Shared cultural traits and institutions may explain such spillovers. The results are robust to the different binary contiguity spatial weights matrices used and to the various econometric models. Clusters of both high-high and low-low Life Satisfaction communities are also found based on Moran's I test.
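As a rough sketch of the kind of analysis described (not the authors' code), the PySAL ecosystem can compute a binary contiguity weights matrix, Moran's I, and a spatial-lag model. The input file and column names (life_sat, nbr_income, nbr_unemp) are hypothetical.

```python
# Illustrative spatial-econometrics sketch under assumed data; column names
# and the shapefile are placeholders, not the paper's dataset.
import geopandas as gpd
from libpysal.weights import Queen
from esda.moran import Moran
from spreg import ML_Lag

communities = gpd.read_file("communities.shp")   # hypothetical community polygons
w = Queen.from_dataframe(communities)            # binary queen-contiguity weights
w.transform = "r"                                # row-standardise

# Global spatial autocorrelation of community life satisfaction
mi = Moran(communities["life_sat"].values, w)
print(f"Moran's I = {mi.I:.3f}, p = {mi.p_sim:.3f}")

# Spatial-lag model: a community's life satisfaction depends on its neighbours'
y = communities[["life_sat"]].values
X = communities[["nbr_income", "nbr_unemp"]].values
lag_model = ML_Lag(y, X, w=w, name_y="life_sat",
                   name_x=["nbr_income", "nbr_unemp"])
print(lag_model.summary)
```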
What is driving the accelerated expansion of the universe, and do we have an alternative to Einstein's cosmological constant? What is dark matter made of? Do extra dimensions of space and time exist? Is there a preferred frame in the universe? To what extent is left-handedness a preferred symmetry in nature? What is the origin of the baryon asymmetry in the universe? These fundamental and open questions are addressed by precision experiments using ultra-cold neutrons. This year, we celebrate the 50th anniversary of their first production, followed by the first pioneering experiments. In fact, ultra-cold neutrons were discovered twice in the same year, once in the eastern and once in the western world. For five decades now, research projects with ultra-cold neutrons have contributed to the determination of the force constants of nature's fundamental interactions, and several technological breakthroughs in precision make it possible to put these open questions to experimental test. To mark the event and pay tribute to this fabulous object, we present a birthday song for ultra-cold neutrons with acoustic resonant transitions, which are based solely on properties of ultra-cold neutrons: the inertial and gravitational mass of the neutron, Planck's constant, and the local gravity. We make use of a musical intonation system that bears no relation to basic notation and basic musical theory as applied and used elsewhere, but which addresses two fundamental problems of music theory: the problem of reference for the concert pitch and the problem of intonation.
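The abstract does not spell out the formula behind these acoustic transitions. A standard estimate, assuming the textbook "quantum bouncer" spectrum of a neutron in the Earth's gravitational field above a mirror (which involves exactly the quantities listed: neutron mass, Planck's constant, local gravity), is sketched below; the actual pitches used in the paper may differ.

```python
# Back-of-the-envelope estimate of transition frequencies between gravitational
# quantum states of an ultra-cold neutron: E_n = alpha_n * (m g^2 hbar^2 / 2)^(1/3),
# where alpha_n are the negated zeros of the Airy function Ai. This is the
# generic quantum-bouncer formula, not the paper's own score.
from scipy.special import ai_zeros
from scipy.constants import hbar, g, h, m_n, e

n_states = 5
alpha = -ai_zeros(n_states)[0]                    # 2.338, 4.088, 5.521, ...
E = alpha * (m_n * g**2 * hbar**2 / 2) ** (1 / 3)  # state energies in joules

print("State energies [peV]:", E / e * 1e12)       # ~1.4, 2.5, 3.3, ... peV
print("Transition n->1 frequencies [Hz]:", (E[1:] - E[0]) / h)
# The resulting frequencies of a few hundred Hz lie in the audible range,
# which is why the transitions can be rendered "acoustically".
```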
Modelling the disease progression of iron deficiency anaemia (IDA) following oral iron supplement prescriptions is a prerequisite for evaluating the cost-effectiveness of oral iron supplements. Electronic health records (EHRs) from the Clinical Practice Research Datalink (CPRD) provide rich longitudinal data on IDA disease progression in patients registered with 663 General Practitioner (GP) practices in the UK, but they also create challenges for statistical analysis. First, the CPRD data are clustered at multiple levels (GP practices and patients), but their large volume makes it computationally difficult to estimate standard random-effects models for multi-level data. Second, observation times in the CPRD data are irregular and could be informative about the disease progression; for example, shorter or longer gaps between GP visits could be associated with deteriorating or improving IDA. Existing methods for handling informative observation times are mostly based on complex joint models, which add further computational burden. To tackle these challenges, we develop a computationally efficient approach to modelling disease progression with EHR data while accounting for variability across multi-level clusters and informative observation times. We apply the proposed method to the CPRD data to investigate IDA improvement and treatment intolerance following oral iron prescriptions in UK primary care.
Two of the main problems encountered in the development and accurate validation of photometric redshift (photo-z) techniques are the lack of spectroscopic coverage in feature space (e.g. colours and magnitudes) and the mismatch between the photometric error distributions associated with the spectroscopic and photometric samples. Although these issues are well known, there is currently no standard benchmark allowing a quantitative analysis of their impact on the final photo-z estimation. In this work, we present two galaxy catalogues, Teddy and Happy, built to enable a more demanding and realistic test of photo-z methods. Using photometry from the Sloan Digital Sky Survey and spectroscopy from a collection of sources, we constructed datasets which mimic the biases between the underlying probability distributions of the real spectroscopic and photometric samples. We demonstrate the potential of these catalogues by submitting them to the scrutiny of different photo-z methods, including machine learning (ML) and template-fitting approaches. Beyond the expected poor results from most ML algorithms in cases with missing coverage in feature space, we were able to recognise the superiority of global models in the same situation, as well as the general failure across all types of methods when incomplete coverage is combined with the presence of photometric errors - a data situation which photo-z methods have not been trained to deal with so far and which must be addressed by future large-scale surveys. Our catalogues represent the first controlled environment allowing a straightforward implementation of such tests. The data are publicly available within the COINtoolbox (https://github.com/COINtoolbox/photoz_catalogues).
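A minimal sketch of the kind of ML photo-z baseline these catalogues are designed to stress-test is shown below: train a regressor on the spectroscopic sample and predict on the photometric-like sample. The file names and column names (colour indices and z_spec) are placeholders, not the catalogues' actual schema.

```python
# Hypothetical ML photo-z baseline: regress spectroscopic redshift on colours,
# then predict on a test sample whose feature-space coverage may not match.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

colours = ["u-g", "g-r", "r-i", "i-z"]            # placeholder feature columns

train = pd.read_csv("teddy_train.csv")            # spectroscopic training sample
test = pd.read_csv("teddy_test.csv")              # photometric-like test sample

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[colours], train["z_spec"])

z_photo = model.predict(test[colours])
print("MAE:", mean_absolute_error(test["z_spec"], z_photo))
# When the test sample probes regions of colour space absent from the training
# set (the coverage mismatch discussed above), this error degrades sharply.
```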
Because of its mathematical tractability, the Gaussian mixture model holds a special place in the literature on clustering and classification. For all its benefits, however, the Gaussian mixture model poses problems when the data are skewed or contain outliers. Because of this, methods have been developed over the years for handling skewed data, and they fall into two general categories. The first is to consider a mixture of more flexible skewed distributions, and the second is based on incorporating a transformation to near-normality. Although these methods have been compared in their respective papers, there has yet to be a detailed comparison to determine when one method might be more suitable than the other. Herein, we provide a detailed comparison on many benchmarking datasets, as well as a novel method for assessing cluster separation.
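As a small illustration of the second strategy mentioned above (transforming toward normality before fitting a Gaussian mixture), the following sketch uses simulated skewed clusters; the skewed-distribution mixtures of the first strategy require specialised packages and are not shown. The data and settings are illustrative only.

```python
# Compare a plain Gaussian mixture on skewed data with the same mixture fitted
# after a Yeo-Johnson transformation toward normality (simulated data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import PowerTransformer
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Two skewed (log-normal) clusters in 2D with known labels
X = np.vstack([rng.lognormal(mean=0.0, sigma=0.6, size=(300, 2)),
               rng.lognormal(mean=1.5, sigma=0.6, size=(300, 2))])
labels = np.repeat([0, 1], 300)

# Gaussian mixture on the raw, skewed data
raw_pred = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# Transform toward normality first, then fit the same mixture
X_t = PowerTransformer(method="yeo-johnson").fit_transform(X)
trans_pred = GaussianMixture(n_components=2, random_state=0).fit_predict(X_t)

print("ARI on raw data:        ", adjusted_rand_score(labels, raw_pred))
print("ARI after transformation:", adjusted_rand_score(labels, trans_pred))
```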
