Financial risk modeling

Financial risk modeling is the use of formal mathematical and econometric techniques to measure, monitor and control the market risk, credit risk, and operational risk on a firm's balance sheet, on a bank's trading book, or in a fund manager's portfolio; see Financial risk management. Risk modeling is one of many subtasks within the broader area of financial modeling.

Application

Risk modeling uses a variety of techniques, including value at risk (VaR), historical simulation (HS), and extreme value theory (EVT), to analyze a portfolio and forecast the likely losses that would be incurred under a variety of risks. As above, such risks are typically grouped into credit risk, market risk, model risk, liquidity risk, and operational risk categories.
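As a concrete illustration (not tied to any particular institution's methodology), a one-day VaR can be estimated by historical simulation directly from the empirical distribution of past returns. The function and sample figures below are hypothetical, a minimal sketch rather than a production implementation:

```python
# Historical-simulation VaR: a minimal, purely illustrative sketch.

def historical_var(returns, confidence=0.95):
    """One-day VaR at the given confidence level, read directly from
    the empirical distribution of past daily portfolio returns."""
    losses = sorted(-r for r in returns)      # losses as positive numbers, ascending
    index = int(confidence * len(losses))     # position of the confidence quantile
    index = min(index, len(losses) - 1)       # guard against running off the end
    return losses[index]

# Hypothetical sample of past daily returns:
past_returns = [0.01, -0.02, 0.005, -0.015, 0.003, -0.03, 0.02, -0.001]
var_95 = historical_var(past_returns, 0.95)
print(var_95)  # 0.03: the worst loss in this small sample
```

With so few observations the 95% quantile coincides with the worst observed loss; in practice far longer return histories (and often EVT for the extreme tail) are used.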

Many large financial intermediary firms use risk modeling to help portfolio managers assess the amount of capital reserves to maintain, and to help guide their purchases and sales of various classes of financial assets.

Formal risk modeling is required under the Basel II proposal for all major international banking institutions by the various national depository institution regulators. In the past, risk analysis was done qualitatively; with the advent of powerful computing software, quantitative risk analysis can now be carried out quickly and at scale.

Criticism

Modeling price changes with distributions that have finite variance is now widely held to be inappropriate. Benoît Mandelbrot found in the 1960s that changes in prices in financial markets do not follow a Gaussian distribution, but are better modeled by Lévy stable distributions. The scale of change, or volatility, depends on the length of the time interval to a power slightly greater than 1/2. Large changes up or down, reflecting the distribution's so-called fat tails, are more likely than a Gaussian distribution with an estimated standard deviation would predict.[1][2]
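The practical effect of fat tails can be made concrete with a small, purely illustrative comparison. Since Lévy stable distributions generally lack a closed-form CDF, a unit-variance Student-t with 3 degrees of freedom (a common stand-in for heavy-tailed return distributions, and an assumption of this sketch, not of the sources cited) is compared against the Gaussian:

```python
import math

def normal_tail(z):
    """P(Z > z) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def t3_tail(z):
    """P(T > z) for a Student-t with 3 degrees of freedom, rescaled to
    unit variance (raw t(3) has variance 3), using its closed-form CDF."""
    x = z * math.sqrt(3)
    u = x / math.sqrt(3)
    cdf = 0.5 + (u / (1 + x * x / 3) + math.atan(u)) / math.pi
    return 1 - cdf

# Probability of a move beyond five standard deviations:
print(normal_tail(5))  # ~2.9e-7
print(t3_tail(5))      # ~1.6e-3, thousands of times more likely
```

A 5-sigma daily move, essentially impossible under the Gaussian, occurs under the heavy-tailed alternative roughly once every few years of trading days, which is closer to what markets actually exhibit.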

Quantitative risk analysis and its modeling have been called into question in the light of the corporate scandals of the early 2000s (most notably Enron), of Basel II, the revised FAS 123R and the Sarbanes–Oxley Act, and for their failure to predict the financial crash of 2008.[1][3][4]

The rapid development of financial innovations has led to sophisticated models that rest on sets of assumptions; such models are typically prone to model risk. Several approaches exist for dealing with model uncertainty. Jokhadze and Schmidt (2018) propose a practical model risk measurement framework based on Bayesian calculation.[5] They introduce superposed risk measures that enable consistent market and model risk measurement.
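The cited framework is more elaborate than can be shown here; as a loose illustration of the general idea of averaging over model uncertainty (not the authors' actual construction), one might weight the risk estimates of competing models by posterior model probabilities. All figures and weights below are hypothetical:

```python
# Illustrative only: blending risk estimates across competing models.
# Weights and estimates are hypothetical, not from Jokhadze & Schmidt (2018).

def model_averaged_risk(estimates, posterior_weights):
    """Weight each model's risk estimate by its posterior probability;
    weights must sum to one."""
    assert abs(sum(posterior_weights) - 1.0) < 1e-9
    return sum(e * w for e, w in zip(estimates, posterior_weights))

var_estimates = [0.021, 0.035, 0.028]  # VaR from three competing models
weights = [0.5, 0.3, 0.2]              # hypothetical posterior model weights
print(model_averaged_risk(var_estimates, weights))  # ~0.0266
```

The blended figure sits between the individual estimates; the spread among them is itself a rough indicator of how much model risk is present.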

References

  1. Nassim Nicholas Taleb (2007). The Black Swan: The Impact of the Highly Improbable. Random House. ISBN 978-1-4000-6351-2.
  2. Benoît Mandelbrot and Richard L. Hudson (2006). The Misbehavior of Markets: A Fractal View of Financial Turbulence. Basic Books. ISBN 978-0-465-04357-6.
  3. Alan Greenspan (2008-03-17). "We will never have a perfect model of risk". Financial Times. Retrieved 2009-07-18.
  4. "Financial economics: Efficiency and beyond". The Economist. 2009-07-16. Retrieved 2009-07-18. From The Economist print edition.
  5. Jokhadze, Valeriane; Schmidt, Wolfgang M. (2018). "Measuring model risk in financial risk management and pricing". SSRN working paper. doi:10.2139/ssrn.3113139. S2CID 169594252.