
Measuring Financial Advice: aligning client elicited and revealed risk

Posted by John R.J. Thompson
Publication date: 2021
Paper language: English





Financial advisors use questionnaires and discussions with clients to determine a suitable portfolio of assets that will allow clients to reach their investment objectives. Financial institutions assign risk ratings to each security they offer, and those ratings are used to guide clients and advisors toward an investment portfolio risk that suits their stated risk tolerance. This paper compares client Know Your Client (KYC) profile risk allocations to their investment portfolio risk selections using a value-at-risk discrepancy methodology. Value-at-risk is used to measure elicited and revealed risk, to show whether clients are over-risked or under-risked, whether changes in KYC risk lead to changes in portfolio configuration, and whether cash flow affects a client's portfolio risk. We demonstrate the effectiveness of value-at-risk at measuring clients' elicited and revealed risk on a dataset provided by a private Canadian financial dealership covering over 50,000 accounts for over 27,000 clients and 300 advisors. By measuring both elicited and revealed risk with the same measure, we can determine how well a client's portfolio aligns with their stated goals. We believe that using value-at-risk to measure client risk provides valuable insight that helps advisors ensure that their practice is KYC compliant, better tailor client portfolios to stated goals, communicate advice to clients to either align their portfolios with stated goals or refresh those goals, and monitor changes to clients' risk positions across their practice.
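As a concrete illustration of the comparison described above, the following Python sketch measures revealed risk as the historical value-at-risk of a portfolio's returns and compares it with an elicited risk limit attached to a KYC category. It is only a sketch: the KYC_VAR_LIMITS thresholds, the function names, and the use of simple historical VaR on simulated returns are assumptions made for illustration, not the paper's implementation or the dealership's risk ratings.

import numpy as np

# Hypothetical mapping from KYC (elicited) risk categories to a maximum
# tolerable 95% value-at-risk, expressed as a fraction of wealth.
# These thresholds are illustrative only.
KYC_VAR_LIMITS = {"low": 0.05, "medium": 0.15, "high": 0.30}

def historical_var(returns, alpha=0.95):
    # Historical VaR: the loss exceeded with probability (1 - alpha)
    # in the empirical return distribution.
    return -np.quantile(returns, 1 - alpha)

def risk_discrepancy(portfolio_returns, kyc_category, alpha=0.95):
    # Revealed risk (portfolio VaR) minus elicited risk (KYC limit).
    # Positive values suggest an over-risked client, negative under-risked.
    revealed = historical_var(portfolio_returns, alpha)
    elicited = KYC_VAR_LIMITS[kyc_category]
    return revealed - elicited

# Toy usage with simulated annual returns for a single account.
rng = np.random.default_rng(0)
simulated_returns = rng.normal(loc=0.06, scale=0.12, size=2000)
print(round(float(risk_discrepancy(simulated_returns, "medium")), 3))

A positive discrepancy flags an account whose portfolio risk exceeds the tolerance elicited in the KYC profile, which is the kind of misalignment the paper proposes advisors monitor across their practice.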




Read also

We simulate a simplified version of the price process including bubbles and crashes proposed in Kreuser and Sornette (2018). The price process is defined as a geometric random walk combined with jumps modelled by separate, discrete distributions associated with positive (and negative) bubbles. The key ingredient of the model is to assume that the sizes of the jumps are proportional to the bubble size. Thus, the jumps tend to efficiently bring excess bubble prices back close to a normal or fundamental value (efficient crashes). This is different from existing processes studied that assume jumps independent of the mispricing. The present model is simplified compared to Kreuser and Sornette (2018) in that we ignore the possibility of a change in the probability of a crash as the price accelerates above the normal price. We study the behaviour of investment strategies that maximize the expected log of wealth (Kelly criterion) for the risky asset and a risk-free asset. We show that the method behaves similarly to Kelly on geometric Brownian motion in that it outperforms other methods in the long term, and it beats classical Kelly. As a primary source of outperformance, we identify knowledge about the presence of crashes, but interestingly find that knowledge of only the size, and not the time of occurrence, already provides a significant and robust edge. We then perform an error analysis to show that the method is robust with respect to variations in the parameters. The method is most sensitive to errors in the expected return.
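To convey the flavour of the simplified process, the toy Python simulation below combines a geometric random walk with occasional crashes whose size is proportional to the current bubble (the price in excess of a fixed fundamental value), and evaluates constant-fraction strategies by terminal log wealth. All parameter values, and the constant-fraction strategy itself, are arbitrary assumptions for illustration; this is neither Kreuser and Sornette's calibration nor the paper's Kelly-optimal allocation.

import numpy as np

rng = np.random.default_rng(1)

def simulate_prices(n=5000, mu=0.0002, sigma=0.01, fundamental=1.0,
                    crash_prob=0.01, crash_frac=0.8):
    # Geometric random walk plus crashes proportional to the bubble size,
    # so crashes pull the price back toward the fundamental value.
    p = np.empty(n)
    p[0] = fundamental
    for t in range(1, n):
        p[t] = p[t - 1] * np.exp(mu + sigma * rng.standard_normal())
        if p[t] > fundamental and rng.random() < crash_prob:
            p[t] -= crash_frac * (p[t] - fundamental)  # "efficient crash"
    return p

def terminal_log_wealth(prices, fraction):
    # Log wealth from rebalancing to a constant fraction in the risky asset,
    # with the risk-free rate set to zero for simplicity.
    r = np.diff(prices) / prices[:-1]
    return float(np.sum(np.log1p(fraction * r)))

prices = simulate_prices()
for f in (0.25, 0.5, 1.0):
    print(f, round(terminal_log_wealth(prices, f), 3))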
The underlying idea behind the construction of indices of economic inequality is based on measuring deviations of various portions of low incomes from certain references or benchmarks, which could be point measures, such as the population mean or median, or curves, such as the hypotenuse of the right triangle into which every Lorenz curve falls. In this paper we argue that by appropriately choosing population-based references, called societal references, and distributions of personal positions, called gambles, which are random, we can meaningfully unify classical and contemporary indices of economic inequality, as well as various measures of risk. To illustrate the approach proposed herein, we put forward and explore a risk measure that takes into account the relativity of large risks with respect to small ones.
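A familiar special case of this deviation-from-a-reference idea, and not the unified measure proposed in the paper, is the Gini index, which averages the gap between the egalitarian reference line $p$ and the Lorenz curve $L(p)$:

$$ G = 2 \int_0^1 \bigl( p - L(p) \bigr) \, dp = 1 - 2 \int_0^1 L(p) \, dp . $$

Replacing the egalitarian line with other societal references, and deterministic positions with random gambles, is the direction in which the paper generalizes such indices toward risk measures.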
This paper studies the estimation of network connectedness with a focally sparse structure. We try to uncover the network effect through a flexible sparse deviation from a predetermined adjacency matrix; more specifically, the sparse deviation structure can be regarded as latent or misspecified linkages. To obtain high-quality estimators for the parameters of interest, we propose a double-regularized high-dimensional generalized method of moments (GMM) framework, which also facilitates inference. Theoretical results on consistency and asymptotic normality are provided, accounting for general spatial and temporal dependency of the underlying data-generating processes. Simulations demonstrate the good performance of our proposed procedure. Finally, we apply the methodology to study the spatial network effect of stock returns.
We test the hypothesis that interconnections across financial institutions can be explained by a diversification motive. This idea stems from the empirical evidence of long-term exposures that cannot be explained by a liquidity motive (maturity or currency mismatch). We model endogenous interconnections of heterogeneous financial institutions facing regulatory constraints using a maximization of their expected utility. Both theoretical and simulation-based results are compared to a stylized genuine financial network. The diversification motive appears to plausibly explain interconnections among key players. Using our model, the impact of regulation on interconnections between banks, currently discussed at the Basel Committee on Banking Supervision, is analyzed.
We develop tools for utilizing correspondence experiments to detect illegal discrimination by individual employers. Employers violate US employment law if their propensity to contact applicants depends on protected characteristics such as race or sex. We establish identification of higher moments of the causal effects of protected characteristics on callback rates as a function of the number of fictitious applications sent to each job ad. These moments are used to bound the fraction of jobs that illegally discriminate. Applying our results to three experimental datasets, we find evidence of significant employer heterogeneity in discriminatory behavior, with the standard deviation of gaps in job-specific callback probabilities across protected groups averaging roughly twice the mean gap. In a recent experiment manipulating racially distinctive names, we estimate that at least 85% of jobs that contact both of two white applications and neither of two black applications are engaged in illegal discrimination. To assess the tradeoff between type I and II errors presented by these patterns, we consider the performance of a series of decision rules for investigating suspicious callback behavior under a simple two-type model that rationalizes the experimental data. Though, in our preferred specification, only 17% of employers are estimated to discriminate on the basis of race, we find that an experiment sending 10 applications to each job would enable accurate detection of 7-10% of discriminators while falsely accusing fewer than 0.2% of non-discriminators. A minimax decision rule acknowledging partial identification of the joint distribution of callback rates yields higher error rates but more investigations than our baseline two-type model. Our results suggest illegal labor market discrimination can be reliably monitored with relatively small modifications to existing audit designs.
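The type I versus type II trade-off discussed above can be illustrated with a toy Monte Carlo under a hypothetical two-type model. The discriminator share, the callback probabilities, and the flagging rule in the Python sketch below are arbitrary assumptions chosen for illustration; they do not reproduce the paper's estimates, bounds, or decision rules.

import numpy as np

rng = np.random.default_rng(2)

def simulate_audit(n_jobs=100_000, n_apps=2, share_disc=0.17,
                   p_equal=0.10, p_white=0.40, p_black=0.02):
    # Toy two-type model: a share of jobs discriminates (unequal callback
    # rates by race); the remaining jobs call back both groups at p_equal.
    disc = rng.random(n_jobs) < share_disc
    pw = np.where(disc, p_white, p_equal)
    pb = np.where(disc, p_black, p_equal)
    white_calls = rng.binomial(n_apps, pw)
    black_calls = rng.binomial(n_apps, pb)
    # Flag a job that contacts every white application and no black one.
    flagged = (white_calls == n_apps) & (black_calls == 0)
    precision = float(disc[flagged].mean())  # flagged jobs that truly discriminate
    recall = float(flagged[disc].mean())     # discriminators that get flagged
    return round(precision, 3), round(recall, 3)

print(simulate_audit())           # two applications per group
print(simulate_audit(n_apps=4))   # stricter evidence: higher precision, fewer detections

Requiring callbacks for every white application and none for any black application becomes rarer as the number of applications grows, so precision rises while the share of discriminators detected falls; decision rules for investigating suspicious callback behavior must manage exactly this trade-off.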