Neither ratio can distinguish between intermittent and consecutive losses. When adding the lag of the conditional volatility, the findings are slightly less clear cut. Hansen & Sargent achieve robustness by working with a neighborhood of the reference model and maximizing the worst-case payoff over that neighborhood. We examine the fundamental trading of economic and social powers among agents, and draw on well-known methods of game theory for simulating and analysing the outcomes of these interactions. Faculty of Philosophy at Erasmus University Rotterdam, and Erasmus Institute for Philosophy and Economics. Research Master Philosophy and Economics thesis: “On the Robustness of Economic Models,” by Johanna Marie Thoma, BA (Hons.). Also reported in Table 6 are the variance ratio and variance reduction. In principle, the cost of capital analyst could try to forecast how rapidly capital market conditions will return to “normal,” but in practice this would add controversy to the already controversial topic of how to estimate the cost of capital at any given time. In contrast, in the absolutist view, a model would be considered useful for prediction only if it were not rejected on statistical grounds, even though non-rejection does not necessarily imply that predicted effects will be close to actual effects. The first is the view that knowledge is absolute, that is, that there exists a “true” decision-theoretic model from which the observed data are generated. A robustness table presents several different specifications, varying which variables are included. Models are chosen that are “best” for some specific purpose; alternative models may be valid for different purposes. 
As a robustness test and in order to deal with potential issues of endogeneity bias, we also employ a panel-VAR model to examine the relationship between bank management preferences and various banking sector characteristics.19 The main advantage of this methodology is that all variables enter as endogenous within a system of equations, which enables us to reveal the underlying causality among them.20 We specify a panel-VAR model in which the key variable is alpha, the shape parameter of the managerial behavior function; we also include the main right-hand-side variables of the previous section. It turns out that, for all the window lengths, the three indicators of market condition have the right sign and are statistically strongly significant, confirming the previous results obtained with the wide window. The cumulative abnormal return conditional volatility for different windows. The book also discusses … Note: Figure presents impulse response functions (IRFs), which show the responses of a variable of interest to a shock of one plus/minus standard deviation of the same variable or another variable within the panel-VAR. If the financial crisis increases the cost of capital, failure to recognize this increase shortchanges investors. These assumptions, which include the structural specification of the model and the values of its … In these papers the authors tend to examine … We may also expect, however, that firms will not get financed in the latter case, where the venture capitalist’s preplanned exit strategy is toward an acquisition and the entrepreneur does not want to give up control rights. Decision-theoretic models are typically designed and estimated with the goal of predicting the impact on economic agents of changes in the economic environment. Can one provide convincing evidence about the credibility of these exercises? 
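The mechanics behind such IRF figures can be illustrated with a minimal sketch: in a VAR(1), the response at horizon h is simply the coefficient matrix applied h times to the initial shock. The matrix A and the shock below are made-up numbers for illustration, not estimates from the chapter's data.

```python
import numpy as np

# Hypothetical VAR(1) coefficient matrix for two variables,
# e.g. alpha (managerial behavior) and one banking-sector characteristic.
A = np.array([[0.5, -0.2],
              [0.1,  0.6]])

def impulse_response(A, shock, horizons=10):
    """Response of all variables to an initial shock vector,
    traced forward through y_t = A @ y_{t-1}."""
    responses = [shock]
    for _ in range(horizons):
        responses.append(A @ responses[-1])
    return np.array(responses)

# One-standard-deviation shock to the first variable.
irf = impulse_response(A, np.array([1.0, 0.0]))
print(irf[:3])
```

With a stable A (eigenvalues inside the unit circle), the responses decay toward zero, which is the shape one typically sees in the IRF panels.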
There are other senses of “robust” that are often used and are somewhat related: robust to heteroskedasticity or autocorrelation, to outliers, and to various assumption violations (such as error distributions). … economic models is essentially a form of robustness analysis. Given a solution β̂(τ) based on observations {y, X}, any of the y observations may be arbitrarily altered without altering the initial solution, as long as one does not alter the sign of the residuals. … all that this may imply for policy analysis and economic insight. Or begin with a smaller model and add? A much smaller negative effect is observed in the case of the Herfindahl Index. … only a few representative specifications, but there is no reason why … One consideration is whether the instability is generally expected to abate during the regulatory period. This procedure is applied to two cases in which the US is the domestic market: one producing a highly effective hedge (against the UK) and another producing a less effective hedge (against Japan). We have no reason to believe the variables considered in this chapter are incomplete, although more detailed data and/or a greater volume of data could shed further light on the issues raised. It is possible that other confidential data are relevant, but inclusion/exclusion of our control variables did not point to any pronounced concerns about robustness of the tests of the central hypotheses considered. As such, all individual models can be, and often are, subject to some instability over time. In this pragmatic view, there is no true decision-theoretic model, only models that perform better or worse in addressing particular questions. Mamatzakis, ... Mike G. Tsionas, in Panel Data Econometrics, 2019. In Section 3, drawing on a model in population ecology, I explain how robustness analysis differs from de-idealization. 
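This insensitivity of quantile estimates to changes in y can be seen in the simplest case, the sample median (the τ = 0.5 quantile fit of an intercept-only model); the numbers below are arbitrary illustration data:

```python
import numpy as np

# The sample median is the quantile-regression estimate for an
# intercept-only model at tau = 0.5. Moving any observation further
# away on the same side of the fit leaves the estimate unchanged.
y = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
print(np.median(y))               # 3.0

y_contaminated = y.copy()
y_contaminated[-1] = 1e9          # push the outlier arbitrarily far up
print(np.median(y_contaminated))  # still 3.0
```

The residual of the altered point keeps its sign, so the fit does not move; the mean, by contrast, would explode.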
In economics, robustness is the ability of a financial trading system to remain effective under different markets and different market conditions, or the ability of an economic model to remain valid under different assumptions, parameters, and initial conditions. Ghosh (1993) concluded that a smaller than optimal futures position is undertaken when the cointegrating relation is unduly ignored, attributing the under-hedge results to model misspecification. We argued that both themes yielded similar predictions, which were supported in the data. In your opinion, do you think it makes more sense to start with a larger model (including core covariates, and others) and then show that the core covariates don't change when removing some of the "others"? From: Risk and Return for Regulated Industries, 2017; R. Koenker, in International Encyclopedia of the Social & Behavioral Sciences, 2001. Hansen and Sargent (2008, 2015) respond to this difficulty by using robust control theory, which they relate to work on ambiguity in decision theory, including Gilboa and Schmeidler (1989) and Maccheroni et al. … Robust statistics are statistics with good performance for data drawn from a wide range of probability distributions, especially for distributions that are not normal. Robust statistical methods have been developed for many common problems, such as estimating location, scale, and regression parameters. One motivation is to produce statistical methods that are not unduly affected by outliers. Is it true that if one coefficient in a linear model is endogenous, then any individual coefficient will be inconsistent? 
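As a small illustration of the outlier point, compare a classical scale estimate with a robust one on simulated contaminated data (the contamination values and sample size are arbitrary choices for the sketch):

```python
import numpy as np

# Compare a classical and a robust scale estimate on contaminated data.
rng = np.random.default_rng(0)
clean = rng.normal(0, 1, 1000)
contaminated = np.concatenate([clean, [50.0, -60.0, 80.0]])

def mad(x):
    """Median absolute deviation, scaled by 1.4826 to be consistent
    with the standard deviation under normality."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

print(np.std(clean), np.std(contaminated))  # std inflates sharply
print(mad(clean), mad(contaminated))        # MAD barely moves
```

Three wild observations in a thousand are enough to multiply the standard deviation, while the MAD is essentially unaffected.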
(2001) suggested that the hedge ratio should be estimated from a nonlinear model written in first differences. Nonlinear error correction models have also been suggested (not necessarily for estimating the hedge ratio) by Escribano (1987), and the procedure is applied to a model of the demand for money in Hendry and Ericsson (1991). Setting rates based on a transitory blip (up or down) in the cost of capital can lead to rates that will be expected to provide too much or too little return over most of the rate's life (before the next rate setting). … used. Robustness analysis: a philosophical state of the art. The discussion of robustness analysis in philosophy of science starts with Richard Levins's classic statement in 1968 and William Wimsatt's subsequent elaboration in 1981. This type of analysis was severely criticised in an influential article by Levine and Renelt (1992) for its perceived lack of robustness. Out-of-sample validation: out-of-sample validation relies on there being sample data not used in estimation, but that are assumed to come from the same underlying population. The problem with basing validation on model fit is that, like nonstructural estimation, model building is an inductive as well as deductive exercise. Whatever empirical approach to inference is adopted, structural or nonstructural, researchers should strive to provide as much validation evidence as the data and methods permit. Unbalanced panel: pooled OLS vs FE vs RE - which method yields unbiased and robust estimators? As we have illustrated, applications of the DCDP approach have addressed challenging and important questions, often involving the evaluation of counterfactual scenarios or policies. Put differently, how can DCDP models be validated, and how can choices be made among competing models? 
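Returning to hedge ratio estimation: the conventional OLS benchmark, against which the error correction and nonlinear specifications are compared, can be sketched as follows (the data are simulated and the true ratio of 0.8 is an arbitrary choice, not a value from the text):

```python
import numpy as np

# Conventional OLS hedge ratio: regress spot returns on futures
# returns; the slope is h = Cov(ds, df) / Var(df).
rng = np.random.default_rng(1)
fut = rng.normal(0, 0.01, 500)               # futures returns (simulated)
spot = 0.8 * fut + rng.normal(0, 0.004, 500) # spot returns, true h = 0.8

h = np.cov(spot, fut)[0, 1] / np.var(fut, ddof=1)
print(round(h, 3))  # close to 0.8
```

The error correction and GARCH specifications discussed in the text modify this regression (lagged levels term, time-varying second moments) rather than the basic covariance logic.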
However, there may theoretically be cases in which the entrepreneur faces a trade-off when he knows the venture capitalist's preplanned exit strategy is an acquisition: if he gives the venture capitalist more control, the firm is going to have a higher exit value, but at the same time he loses his private benefits; if he gives the venture capitalist less control, the firm is going to have a lower exit value, but the entrepreneur is able to retain his private benefits. … different models are used. For example, of the 98 papers published in The American Economic Review during 2009, 76 involve some data analysis. The second robustness test is to use the hedging approach while calculating the hedge ratio by using various models. The final specification results from a process in which the model structure is revised as estimation proceeds, by adding parameters and changing functional forms, as deficiencies in model fit are discovered. An example of such an approach may be to have a hearing at which only the cost of capital is reset, as opposed to an entire regulatory proceeding.10 Setting rates on a yearly basis is a good example of an approach that mitigates the concerns of volatility in the underlying true cost of capital. Bente Villadsen, ... A. Lawrence Kolbe, in Risk and Return for Regulated Industries, 2017. … the interesting parameter is not very sensitive to the exact specification … Hence, it does not properly reflect the impact of time and does not reward long-term performance. As advocated by Bird et al. … D. 
Wade Hands, “Derivational Robustness, Credible Substitute Systems, and Mathematical Economic Models: The Case of Stability Analysis in Walrasian General …” Of course, the difficult thing is giving operational meaning to the words “small” and “large” and, concomitantly, framing the model in a way sufficiently well-delineated to admit such quantifications (however approximate). Lars Peter Hansen, Thomas J. Sargent, in Handbook of Monetary Economics, 2010. Imad Moosa, Vikash Ramiah, in Emerging Markets and the Global Economy, 2014. Thus, robust control and prediction combine Bayesian learning (about an unknown state vector) with robust control, while adaptive control combines flexible learning about parameters with standard control methods. Nor will non-rejected models necessarily outperform rejected models in terms of their (context-specific) predictive accuracy. This highly accessible book presents the logic of robustness testing, provides an operational definition of robustness that can be applied in all quantitative research, and introduces readers to diverse types of robustness tests. Fig. 5.11: Adaptive control versus robust control. For VIX and Market, it seems that their significance depends on the window length, although the direction of the impact is the expected one (positive). Robustness Checks: Accounting for CSR Event Type. This validation sample can have a number of sources. We presented many robustness checks in Section 12.4 with a wide variety of explanatory variables and dependent variables. Interestingly, where the uncertainty surrounding the impact of CSR is concerned, the CSR event type seems to be of little importance, if any. Only in 6 of 223 cases were differences observed (where the syndicated investor used common equity or warrants when the respondent investor used a security involving debt and/or preferred equity). Figure 6.3. 
We examine the ways in which environments condition the degrees of freedom in agents' behaviours, including their need for constraint in contrast to their need for liberty. Then I test down a general variant of that specification that encompasses rival theories. … (2006a), Klibanoff et al. … The formula of the Sharpe ratio is S = (R̄ − Rf)/σR, with R̄ the annualized return of the trading rule, Rf the annualized risk-free return of the asset under management, and σR the annualized standard deviation of (daily) rule returns. To be able to perform such counterfactual analyses in such a variety of settings, DCDP models must rely on extra-theoretic modeling choices, including functional form and distributional assumptions. If the unusual circumstances are instead believed to be temporary, the regulator may wish to take this into account in setting rates that will be reasonable over the entire regulatory period. Further empirical work in this regard might also consider sources of funds in the spirit of Mayer et al. … It can be defined as t = √N · (R̄/σR), with N the number of (daily) observations, R̄ the average (daily) rule return, and σR the standard deviation of (daily) rule returns. It is not only about the use of an error correction model as opposed to a first-difference model, as various other model specifications have been suggested to estimate the hedge ratio. Indeed, an approach that fails to reflect such underlying instability risks violating the goals for a reasonable return on capital discussed in Chapter 2. As should be clear from this discussion, model validation, and model building more generally, are part art and part science. Its popularity is due in part to its simplicity as well as its intuitive appeal. One source for the validation sample is based on regime shifts. 
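The Sharpe ratio and the t-statistic defined above can be computed directly from a series of daily rule returns; the sketch below assumes the usual 252-trading-day annualization convention and uses simulated returns, neither of which comes from the text:

```python
import numpy as np

# Annualized Sharpe ratio and t-statistic of mean daily rule returns,
# following the definitions above (252 trading days per year assumed).
def sharpe_ratio(daily_returns, rf_annual=0.0):
    r_annual = np.mean(daily_returns) * 252
    sigma_annual = np.std(daily_returns, ddof=1) * np.sqrt(252)
    return (r_annual - rf_annual) / sigma_annual

def t_stat(daily_returns):
    n = len(daily_returns)
    return np.sqrt(n) * np.mean(daily_returns) / np.std(daily_returns, ddof=1)

rng = np.random.default_rng(2)
rule_returns = rng.normal(0.0005, 0.01, 1000)   # simulated daily returns
print(sharpe_ratio(rule_returns), t_stat(rule_returns))
```

With Rf = 0 the two measures differ only by a scale factor, √(252/N), which is one reason neither can distinguish intermittent from consecutive losses: both depend on the returns only through their mean and standard deviation.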
I would also add that the effect may change when you alter the covariates or the sample, but it should do so in a predictable and theoretically consistent manner to be called robust. (1992), for example, estimated a model of the retirement behavior of workers in a single firm who were observed before and after the introduction of a temporary one-year pension window. The robustness of an initial decision is an operational measure of the flexibility which that commitment will leave for useful future decision choice. In Fig. 3, the effect of a one standard deviation shock of the domestic credit to the private sector ratio on alpha is negative, as is that of the sovereign risk variable. The ambitiousness of the research agenda that the DCDP approach can accommodate is a major strength. Interestingly, the smaller the event's window, the greater the conditional volatility. Nevertheless, it is interesting to note that formal tests generally reject DCDP models. Several proposals have been made to ameliorate this effect. The chapter introduces difficulties in seeking optimal solutions to the problems of distribution, especially where agents have formed interest groups, and outlines some methods for achieving effective decisions in the face of bias and prejudice. ScienceDirect ® is a registered trademark of Elsevier B.V. 
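The idea that a coefficient should move in a predictable way when covariates change can be illustrated with a toy omitted-variable exercise; all numbers below are simulated, and the coefficient values are arbitrary:

```python
import numpy as np

# Toy robustness check: estimate the effect of x on y with and
# without a control w, and compare the coefficient on x.
rng = np.random.default_rng(3)
n = 2000
w = rng.normal(size=n)
x = 0.3 * w + rng.normal(size=n)      # x mildly correlated with w
y = 1.0 * x + 0.5 * w + rng.normal(size=n)

def ols(y, X):
    """OLS with an intercept; returns [const, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_short = ols(y, [x])[1]   # omits w: picks up omitted-variable bias
b_long = ols(y, [x, w])[1] # controls for w
print(b_short, b_long)
```

The short regression overstates the effect by roughly 0.5·Cov(x, w)/Var(x), exactly the direction and magnitude the omitted-variable formula predicts; a shift of that kind is "predictable and theoretically consistent," whereas an erratic sign flip would not be.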
Risk and Return for Regulated Industries, 2017, International Encyclopedia of the Social & Behavioral Sciences, The Structural Estimation of Behavioral Models: Discrete Choice Dynamic Programming Methods and Applications, Michael P. Keane, ... Kenneth I. Wolpin, in, Making Inference of Bank Managerial Preferences About Performance: A Panel Analysis, Emerging Market Stocks in Global Portfolios: A Hedging Approach, Corporate Social Responsibility and Macroeconomic Uncertainty, Handbook of Environmental and Sustainable Finance, Bente Villadsen, ... A. 
Lawrence Kolbe, in, Informative spillovers in the currency markets: a practical approach through exogenous trading rules, Venture Capital and Private Equity Contracting (Second Edition), Keuschnigg and Nielsen, 2001, 2003a,b, 2004a,b, Physica A: Statistical Mechanics and its Applications. Model specifications for estimating the hedge ratio:
- The Cochrane-Orcutt method with an AR(2) process in the residuals
- Maximum likelihood with an MA(2) process in the residuals
- Instrumental variables with an AR(3) process in the residuals
- Autoregressive distributed lag model in first differences
- OLS (the hedge ratio is the coefficient on the contemporaneous explanatory variable)
- OLS (the hedge ratio is the long-run coefficient calculated from the impact coefficients)
Looking at the first row of Fig. … A model is deemed invalid if it is rejected according to some statistical criterion. In both settings, robust decision making requires the economic agent or the econometrician to explicitly allow for the risk of misspecification. The estimation results are presented in Table 6, which reports the estimated value of the hedge ratio, its t-statistic, and the coefficient of determination. More recently, the robustness criterion adopted by Levine … The independent variables are the lagged volatility, the event type, and an indicator of the market conditions, that is, VIX in Panel A, market volatility in Panel B, and ICS in Panel C. For each regression we report three tests of the presence of a unit root in the residual of the regressions. At times, I have used regularization on a less carefully selected set of variables. Variables within the panel-VAR are estimated alphas by country and by year (from Tables 5 and 6). Table 6. With all this said, it is our experience that rate regulation tends to adapt to changes in the cost of capital with a lag. This strength is purchased at a cost. 
This process of repeated model pre-testing invalidates the application of standard formal statistical tests. Finally, Section 6 studies robust quadratic classification analysis. Jamie O'Brien, in Shaping Knowledge, 2014. Section 5 considers robust ways of reducing the dimension for high-dimensional data. If estimates seem high or low by historical standards, the analyst should try to understand why. Various attempts have been made to design a modified measure to overcome this shortcoming, but to date such proposals have been unable to retain the simplicity of the t-statistic and the Sharpe ratio, which has impeded their acceptance and implementation. We note that this is not only a modeling issue, but also a policy issue. A separate, though related, issue is how the regulator should respond when the true underlying cost of capital enters a volatile period, for example, following the recent financial crisis. Randomized social experiments have also provided opportunities for model validation and selection. Personally, I use economic theory to pick a preferred specification that is relatively parsimonious. The estimates and the associated inference apparatus have an inherent distribution-free character, since quantile estimation is influenced only by the local behavior of the conditional distribution of the response near the specified quantile. … (2008) and Moosa (2011). In the experiment, families that met an income eligibility criterion were randomly assigned to control and treatment groups. The effect of a one standard deviation shock of the Fraser regulation index on alpha is negative; the same applies for the z-score variable.22 Table 11 presents VDCs and reports the total effect accumulated over 10 and 20 years. 
Although these models tend to have a lot of parameters, sometimes numbering into the hundreds, given the extensiveness of the data moments that these models attempt to fit, the models are actually parsimonious. Kuorikoski, Jaakko; Lehtinen, Aki; Marchionni, Caterina (2007-09-25). Estimation results with nine model specifications for the hedge ratio. While quantile regression estimates are inherently robust to contamination of the response observations, they can be quite sensitive to contamination of the design observations, {xi}. In general, all models discussed here have characteristics that make them more or less suited to one economic environment versus another. Table 6.5. Fig. 3 presents the IRF diagrams for the second set of variables under examination: alpha, the Herfindahl Index, the ratio of domestic credit to the private sector, and the sovereign risk variable. Specifically, if p and p∗ are related by the long-run relation pt = α + βp∗t + εt, and if they are cointegrated such that εt ∼ I(0), then equation (6) is misspecified and the correctly specified model is an error correction model of the form Δpt = γ0 + γ1Δp∗t + θεt−1 + ut, where θ is the coefficient on the error correction term, which should be significantly negative for the model to be valid. Can a better alternative be found? A better alternative might be to set rates on the current estimates and provide an efficient mechanism by which rates can be adjusted as the cost of capital returns to a more normal state. Kroner and Sultan (1993) used a bivariate GARCH error correction model to account for both nonstationarity and time-varying moments. The validation sample was purposely drawn from a state in which welfare benefits were significantly lower than in the estimation sample. The "suburb" type happens to be the most important one with a negative impact on the uncertainty. 
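The error correction logic above can be sketched with a two-step Engle-Granger procedure: first estimate the long-run relation in levels, then regress the first difference on the lagged equilibrium error. The cointegrated series below are simulated and all data-generating values are made up:

```python
import numpy as np

# Two-step Engle-Granger sketch of the error correction model:
# step 1 estimates the long-run relation, step 2 estimates theta,
# the coefficient on the lagged equilibrium error.
rng = np.random.default_rng(4)
n = 1000
p_star = np.cumsum(rng.normal(size=n))       # random-walk driver
p = 1.0 * p_star + rng.normal(0, 0.5, n)     # cointegrated with p_star

def ols(y, X):
    """OLS with an intercept; returns [const, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a, b = ols(p, [p_star])                      # step 1: long-run relation
eps = p - a - b * p_star                     # equilibrium error
dp, dps = np.diff(p), np.diff(p_star)
coefs = ols(dp, [dps, eps[:-1]])             # step 2: ECM
theta = coefs[2]
print(theta)
```

Here theta comes out clearly negative: deviations from the long-run relation are corrected in the next period, which is exactly the sign condition the text requires for the model to be valid.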
This chapter outlines a range of key issues in agent behaviours, including the mental life, beliefs, drives, and patterns of randomness that influence these behaviours. Manigart et al. (2002a). The second approach is based on a pragmatic epistemological view, in which it is acknowledged that all models are necessarily simplifications of agents' actual decision-making behavior. The independent variables are the lagged volatility, the event type (Type), and an indicator of the market conditions, that is, VIX in Panel A, market volatility in Panel B, and ICS in Panel C. All the coefficients have been multiplied by 100 for readability. Looking at evidence from a number of models remains the best practice. In your econometrics class you learn all sorts of analytic tools: ordinary least squares, fixed effects, autoregressive processes, and many more. Note: Figure presents impulse response functions (IRFs), which show the response of a variable of interest to a shock of one plus/minus standard deviation of the same variable or another variable within the panel-VAR. HHI = logarithm of Herfindahl Index; DCPC = logarithm of the domestic credit to the private sector as a percent of GDP; sovereign = sovereign lending rate. The latter were offered a rent subsidy. In Panel A of Table 6.4 we present the results of the regression analysis when only the dependent variable is included in the regression. Is it the case that the cost of capital has changed significantly, or is it a problem with the models and how they are implemented in the current environment? Many situations are subject to the "law" of diminishing marginal benefits and/or increasing marginal costs, which implies that the impact of the independent variables won't be constant (linear). For each regression we report three tests of the presence of a unit root in the residual of the regressions. 
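One standard way to accommodate such diminishing marginal effects is to add a squared term to the regression; the sketch below uses simulated concave data with arbitrary coefficients:

```python
import numpy as np

# When marginal effects diminish, a purely linear fit misses the
# curvature; adding a squared regressor captures it.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 500)
y = 3 * x - 0.2 * x**2 + rng.normal(0, 1, 500)  # concave true relation

def ols(y, X):
    """OLS with an intercept; returns [const, slopes...]."""
    X = np.column_stack([np.ones(len(y)), *X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_lin = ols(y, [x])           # linear specification
b_quad = ols(y, [x, x**2])    # quadratic specification
print(b_quad)                 # roughly [0, 3, -0.2]
```

The negative coefficient on the squared term is the diminishing-returns pattern: the marginal effect of x, 3 − 0.4x, shrinks as x grows, which a constant-slope specification cannot represent.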
Third, other variables considered but not explicitly reported included portfolio size per manager and tax differences across countries (in the spirit of Kanniainen and Keuschnigg, 2003, 2004; Keuschnigg, 2004; Keuschnigg and Nielsen, 2001, 2003a,b, 2004a,b). Further theoretical work in the spirit of Casamatta and Haritchabalet (2007) and empirical work in the spirit of Lerner (1994a,b), Lockett and Wright (2001), and Gompers (1995) could consider staging and syndication vis-à-vis preplanned exits; those topics are beyond the scope of this chapter. Hendry and Ericsson (1991) suggest that a polynomial of degree three in the error correction term is sufficient to capture the adjustment process.