
What is capital? The concept is central to theoretical economics, but also to applications of economic theory in industry and policy. One would thus assume that there is at least a tentative agreement regarding the meaning of the word, whether it be Karl Marx’s das Kapital or something else. However, the history of theoretical economics suggests otherwise. Between the 1950s and the 1970s, in the ivory towers of two Cambridges, separated both by the Atlantic Ocean and by their approaches to fundamental economics, the Cambridge Capital Controversy (CCC) exposed deep frictions within the field of economics surrounding the meaning and definition of capital.1 In the United Kingdom, Joan Robinson and Piero Sraffa kicked off the battle with a series of publications problematizing core concepts in the Neoclassical account of economic growth.2 In the United States of America, Paul Samuelson and Robert Solow fought to maintain the supremacy of the Neoclassical approach against these allegations of theoretical breakdown and conceptual inconsistency levied by their British counterparts.
What occurred in this twenty-year period has been both largely ignored and deeply impactful. What were the main points of disagreement between the two camps? How has Neoclassical economics maintained its dominant theoretical position? What can we learn about the relationship between empirical data and pre-theoretical commitments in economics? Can we even define capital at all? Let us try now to shed some light on these questions, and in doing so shed some light on the context and impact of the CCC.
What’s at Stake?
What sits at the core of this controversy is a deceptively simple equation, the Cobb-Douglas function, introduced in 1928:
$$Y = AK^{\alpha}L^{\beta} \tag{1}$$
In this equation Y, the total output (GDP, firm output, etc.), is modeled as a function of total factor productivity (A), capital (K), labor (L), and the respective output elasticities α and β.3 The Cobb-Douglas function attempts to represent production growth and hinges on the assumption that capital (K) can be robustly modeled as a single, homogenous factor, an assumption central to the dominant Neoclassical capital theory and its associated marginal productivity theory.4 While admittedly this talk of abstract mathematical economics might not be all too sexy to a historian’s ears, the question of whether we can model K in this fashion is intimately related to whether one can substantiate the claim that workers receive their fair share of the profit they help to create for capital owners. With that juicy bait, stick with me while I give you a couple more equations, the marginal productivity conditions:
$$\frac{\partial F(K,L)}{\partial K} = r + \delta \tag{2}$$
and,
$$\frac{\partial F(K,L)}{\partial L} = w \tag{3}$$
In these equations, r is the rate of return on capital (the interest rate), δ is the depreciation rate, w is the wage rate, and F(K,L) is the production function (how much output Y from equation (1) is produced from K and L). The consequence of these equations, if we assume that K is a single, homogenous factor, is that each unit of capital (K) and labor (L) is paid its marginal product (i.e. the change in output resulting from each additional unit of input). If this is the case, the free market is fair and Marxist critiques of extractive economic systems are dead in the water. Capital need not be defined by its social or historical contexts. If there is any economic inequality, this is simply a reflection of differences in productivity. Long live the meritocracy. However, if K cannot be modelled as a single, homogenous factor, then it is not clear how ∂F/∂K can be computed, and the simplicity of determining relative returns disappears, taking with it the reliability of marginal products as indicators of economic prosperity, equality and growth. And thus, the equations come to life.
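To make this tidy story concrete, here is a minimal sketch in Python of equations (1)–(3) under the homogenous-K assumption. All parameter values are made up for illustration; the point is simply that with constant returns to scale (α + β = 1), paying each factor its marginal product exactly exhausts total output, leaving no surplus for anyone to extract.

```python
# Minimal sketch of equations (1)-(3); all parameter values are hypothetical.
A, alpha, beta = 1.5, 0.3, 0.7   # made-up productivity and output elasticities
K, L = 100.0, 50.0               # made-up capital stock and labor force

Y = A * K**alpha * L**beta                   # equation (1): total output
MPK = alpha * A * K**(alpha - 1) * L**beta   # dY/dK, the left side of equation (2)
MPL = beta * A * K**alpha * L**(beta - 1)    # dY/dL, the wage w in equation (3)

# With constant returns to scale (alpha + beta = 1), paying each unit of
# capital MPK and each unit of labor MPL exactly exhausts output (Euler's theorem).
print(f"output Y                = {Y:.2f}")
print(f"capital's share (MPK*K) = {MPK * K:.2f}")
print(f"labor's share   (MPL*L) = {MPL * L:.2f}")
print(f"shares sum to Y?          {abs(MPK * K + MPL * L - Y) < 1e-9}")
```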
Who’s at Battle?
The first blow of the CCC was dealt in 1953, when Joan Robinson of the UK delegation (Cambridge 1) published a paper attacking the core idea that K could be treated as a single, homogenous quantity.5 Her criticism was based on the straightforward observation that the things we consider capital are far too diverse to be aggregated in this way. Machines, buildings, software, and knowledge are not the same. They each have their own characteristics, risks, and units of measurement. Robinson’s criticism set off a chain reaction of papers, being immediately followed by further critiques of Neoclassicism by Richard Kahn in 1956, Piero Sraffa in 1960, Nicholas Kaldor in 1961, and Luigi Pasinetti in 1962 and 1966. Cambridge 2 responded in kind with a series of defenses, primarily those of Robert Solow in 1956, Paul Samuelson in 1962 and 1966, and Franco Modigliani in 1966.
Sraffa’s 1960 work was particularly impactful on the CCC. His Production of Commodities by Means of Commodities was a robust theory of pricing and distribution that directly challenged Neoclassicism.6 He and his followers committed themselves to identifying logical contradictions in the foundations of Neoclassical theory, and in 1966 they succeeded.
It’s necessary for us to again consider some equations (I’m sorry). Remember equation (1), the Cobb-Douglas function? Well, Cambridge 2’s Robert Solow introduced a slightly modified version of this function in 1956 that allowed for dynamic capital accumulation, the Solow Model:7
$$Y = AK^{\alpha}L^{1-\alpha} \tag{4}$$
Using this model, the Neoclassicists developed an approach to determining the real interest rate r by employing the marginal productivity of capital:
$$MPK = \frac{\partial Y}{\partial K} = \alpha A K^{\alpha-1}L^{1-\alpha} \tag{5}$$
This equation assumes that a given firm would invest in capital until the marginal productivity of capital (MPK) equals the real interest rate r plus the depreciation rate δ:
$$r + \delta = MPK = \alpha A K^{\alpha-1}L^{1-\alpha} \tag{6}$$
These assumptions and associated equations codify the notion that if the real interest rate r is higher, a given firm will prioritize less capital-intensive techniques, and if r is lower, it will prioritize more capital-intensive techniques, resulting in a very smooth inverse relationship between capital intensity and interest rates. This requires an unambiguous and simple conceptualization of capital. The smooth inverse relationship proved to be a rich target for the actors from Cambridge 1, as ostensibly one could simply look at real-world empirical data to determine whether or not capital and interest exhibited this relationship.
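Equation (6) can be solved explicitly for the capital stock a firm would choose at any given interest rate. The short sketch below does exactly that, with parameter values I have invented purely for illustration, to show the smooth inverse relationship the Neoclassical model predicts:

```python
# What equation (6) implies, with made-up parameter values. Solving
#   r + delta = alpha * A * K**(alpha - 1) * L**(1 - alpha)
# for K gives K* = L * (alpha * A / (r + delta))**(1 / (1 - alpha)),
# which falls smoothly and monotonically as the interest rate r rises.

A, alpha, delta, L = 1.0, 0.3, 0.05, 100.0   # hypothetical parameters

def optimal_capital(r: float) -> float:
    """Capital stock at which MPK equals r + delta (equation 6)."""
    return L * (alpha * A / (r + delta)) ** (1.0 / (1.0 - alpha))

for r in (0.01, 0.03, 0.05, 0.10, 0.20):
    print(f"r = {r:.2f}  ->  K* = {optimal_capital(r):9.2f}")
# Higher r, lower K*: the smooth inverse relationship Cambridge 1 attacked.
```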
In his 1966 publication, Pasinetti presented the first model clearly exhibiting reswitching.8 If the Neoclassical assumption were correct, then we would expect something like the following:
| Interest Rate (r) | Optimal Technique to Maximize Returns |
| --- | --- |
| High r | A |
| Medium r | B |
| Low r | C |
Where techniques A, B, and C represent specific ways of balancing labor and capital intensity in production. Because capital intensity and interest rates are supposed to exhibit a smooth, inverse relationship, technique A should be less capital-intensive than technique B and far less capital-intensive than technique C. We should certainly never expect something like:
| Interest Rate (r) | Optimal Technique to Maximize Returns |
| --- | --- |
| High r | A |
| Medium r | B |
| Low r | A |
Where technique A is employed in both the high and the low interest rate scenarios. However, this is exactly what Pasinetti found, and it is known as reswitching: the optimal technique for maximizing returns can be the same in a high-interest-rate and a low-interest-rate scenario. This discovery cut deep into the theoretical assumptions of Neoclassicism, so much so that in 1966 Paul Samuelson was forced to acknowledge that reswitching was indeed possible, at least theoretically, and that Neoclassical theory was flawed, at least a little bit.9
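Reswitching is easy to exhibit numerically. The toy example below is not Pasinetti’s own model but a stylized two-technique comparison in the spirit of Samuelson’s 1966 “A Summing Up”: technique A uses 7 units of labor two periods before output; technique B uses 2 units three periods before plus 6 units one period before. Comparing their costs, compounded at the interest rate r, reproduces exactly the “impossible” second table above:

```python
# Stylized reswitching example (illustrative numbers in the spirit of
# Samuelson's 1966 "A Summing Up", not Pasinetti's own model).
# Technique A: 7 units of labor applied two periods before output.
# Technique B: 2 units three periods before plus 6 units one period before.

def cost_A(r: float) -> float:
    """Cost of technique A in wage units, compounded at interest rate r."""
    return 7 * (1 + r) ** 2

def cost_B(r: float) -> float:
    """Cost of technique B in wage units, compounded at interest rate r."""
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

for r in (0.25, 0.75, 1.25):   # low, medium, and high interest rates
    best = "A" if cost_A(r) < cost_B(r) else "B"
    print(f"r = {r:.2f}: A costs {cost_A(r):6.2f}, B costs {cost_B(r):6.2f} -> choose {best}")

# A is cheapest at r = 0.25, B at r = 0.75, and A AGAIN at r = 1.25.
# Technique A "reswitches" back in at high interest rates, which no smooth
# inverse capital-intensity/interest-rate relationship can accommodate.
```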
With Samuelson’s 1966 concession, Cambridge 2 appeared to admit defeat: Neoclassical theory had been undermined and the criticisms levied by Cambridge 1 had been vindicated. The final blow against the Neoclassicist approach came in 1970, when Pierangelo Garegnani, from the University of Florence, published an expansion and refinement of Sraffa’s work, proving quite conclusively that the Neoclassical approach to capital aggregation was fundamentally inconsistent due to reswitching.10 Ostensibly, this should be case closed. We have clear winners and clear losers – however, things are not so simple.
Who Won?
Perhaps the most interesting outcome of the CCC is that, in some ways, the loser was really the winner. After reswitching was identified, it became impossible to ignore the veracity of the Cambridge 1 arguments. Reswitching falsified the claim that capital intensity and the interest rate are related in a simple, monotonic way. One would thus reasonably assume that the Neoclassicists were the losers of this whole event – that would be wrong, however. To this day, the Neoclassical school has maintained its dominance despite these fundamental theoretical blows. To understand why, we have to take a closer look at the larger changes impacting both academic and applied economic theory during this period.
Arguably the most consequential change to economics in the 20th century was the introduction of high-powered computing and the quantitative turn in economic modeling. In the early 20th century, economics was still a very discursive and historical discipline, staying true to the philosophical and deductive traditions of the pioneers of early economic thought: Adam Smith, David Ricardo, and John Stuart Mill. Already in the late 19th century, however, with the marginal revolution and the first formalizations of general equilibrium theory, the study of economics had begun to shift towards quantitative analysis.11 The first half of the 20th century saw a continuation of this trend with the introduction of the aforementioned Cobb-Douglas function and of econometrics. The shift was fully cemented with the 1947 publication of Samuelson’s Foundations of Economic Analysis, a treatise on the new mathematical approach to economics.12
When the Second World War erupted, an acute need for optimization rippled across the globe. Whether it be the allocation of resources, the development of military strategy, codebreaking, or the control of inflation, sophisticated practical, mathematical and computational approaches to problem-solving were developed and widely applied during those tumultuous years.13 During this period, Samuelson and Milton Friedman were employed by the U.S. government to assist in the creation of large-scale economic plans targeted at mitigating the economic hardships that come with war. Agencies such as the U.S. Office of Price Administration and the War Production Board relied heavily on statistical modelling to predict the consequences of their economic interventions, further cementing the importance of quantitative approaches to economics in the 20th century.14 These developments were not confined to war rooms – the economists, logicians, computer scientists and mathematicians continued to contribute to their respective fields through publications and lectureships, thus disseminating this new quantitative approach far and wide.
As the specter of war faded in the 1950s, some theoretical economists were itching to explore, expand and apply this new approach to economics, while others were more concerned with taking stock of the changes that had taken hold within the field. The CCC can be understood as a consequence of this post-war stocktaking. While, as we have seen, the challengers from Cambridge 1 were ultimately successful in their attempts to problematize the simple, homogenous understanding of K, and in doing so revealed deep flaws in the core set of assumptions guiding Neoclassical theory, these criticisms held little weight against an approach that was proving to be wildly successful despite its theoretical flaws.
While the CCC was unfolding in the pages of theoretical economics journals, the landscape of applied economics was changing substantially. Over the twenty-odd years during which the CCC raged, Neoclassical growth theory was becoming increasingly dominant with developments such as the Solow growth model (still reliant on the homogenous conceptualization of K), the formalization of general equilibrium theory by Arrow and Debreu15, the application of statistical analysis by Friedman and Schwartz16, and the development of rational expectations theory17 in future-oriented financial modeling. The quantitative revolution had well and truly arrived in the world of economics, and with its position cemented, the field turned away from traditional economic anchors, such as the gold standard, and towards newly available and increasingly financialized concepts which required more rigorous quantitative and computational approaches, such as modeling currency fluctuations, interest rates and the rise of speculative finance.18
This turn towards predictive, quantified, computational modelling became associated with the financialization of economics in the 1970s and 1980s. In 1973, a new equation, the Black-Scholes equation (no, I won’t make you read it), revolutionized financial markets by optimizing and improving option pricing and, importantly, by opening the door for purely mathematical predictive asset pricing.19 This financialization gave rise to an influx of theoretical mathematicians, engineers and other mathematically – but not economically – trained young professionals into careers in finance, greatly impacting how economic theory related to the day-to-day practices of bankers, hedge fund managers and traders.20 It simply did not matter that some abstract, theoretical argument had conclusively proved that the Neoclassical approach was riddled with logical inconsistencies and empirical fallacies – it worked well enough to turn a profit, and the math it employed to do so was easily scaled and modified to bring about high-frequency trading and big money. The study of labor economics or industrial production fell by the wayside as more and more students flooded universities seeking to understand computational finance or algorithmic trading. Economic theory was no longer the philosophical, discursive, deductive, historically entrenched study it had been. It had become dominated by models optimized for financial applications.
The refuted Neoclassical approach lived on in lockstep with the increasingly empirically adequate and pragmatic theories of financial markets that took over our global economic system through the end of the 20th century and into the 21st. Returns were skyrocketing, pockets were fat, and any idea that the very foundation of this system was resting on already crumbled footing was far from anyone’s mind – that is, of course, until it all came crashing down.21

- Cohen, Avi J., and Geoffrey C. Harcourt. “Retrospectives: Whatever Happened to the Cambridge Capital Theory Controversies?” Journal of Economic Perspectives 17, no. 1 (2003): 199-214. ↩︎
- Neoclassical economics is an umbrella term used to refer to various schools of economic thought that generally share a core set of assumptions about the role of rationality, utility and information in economic modeling and decision making. It is often presented as an extension of classical economics derived from Adam Smith and David Ricardo. For a general overview of Neoclassicism, see Charles, Victoria. “Neoclassicism.” Parkstone Press International, 2019. ↩︎
- In economics, an elasticity measures the percentage change in one variable in response to a percentage change in another. ↩︎
- Marginal productivity theory puts forth the notion that rational wage rates are determined by the contribution a worker makes to a given company’s output. Essentially, it posits that it would be irrational to pay a worker more than that worker adds in value to the company. In order to calculate this marginal product of labor (MPL), one needs to be able to calculate the marginal product of capital (MPK). If I cannot determine K, the problem is obvious. ↩︎
- Robinson, Joan. “The Production Function and the Theory of Capital.” The Review of Economic Studies 21, no. 2 (1953-1954): 81-106. ↩︎
- Sraffa, Piero. Production of Commodities by Means of Commodities: Prelude to a Critique of Economic Theory. Cambridge: Cambridge University Press, 1960. ↩︎
- Solow, Robert M. “A Contribution to the Theory of Economic Growth.” The Quarterly Journal of Economics 70, no. 1 (1956): 65-94. ↩︎
- Pasinetti, Luigi L. “Paradoxes in Capital Theory: A Symposium: Changes in the Rate of Profit and Switches of Techniques.” The Quarterly Journal of Economics 80, no. 4 (1966): 503-517. ↩︎
- Samuelson, Paul A. “A Summing Up.” The Quarterly Journal of Economics 80, no. 4 (1966): 568-583. ↩︎
- Garegnani, Pierangelo. “Heterogeneous Capital, the Production Function and the Theory of Distribution.” The Review of Economic Studies 37, no. 3 (1970): 407-436. ↩︎
- For an overview of how economics changed in the 20th century, see Blyth, Mark. Great Transformations: Economic Ideas and Institutional Change in the Twentieth Century. Cambridge: Cambridge University Press, 2002. ↩︎
- Samuelson, Paul A. Foundations of Economic Analysis. Cambridge, MA: Harvard University Press, 1947. ↩︎
- For an overview of the relationship between theoretical economics and WWII, see Guglielmo, Mark. “The Contribution of Economists to Military Intelligence During World War II.” The Journal of Economic History 68, no. 1 (2008): 109-150. For an overview of how optimization in general was impacted by WWII, see Fortun, Michael, and Silvan S. Schweber. “Scientists and the Legacy of World War II: The Case of Operations Research (OR).” Social Studies of Science 23, no. 4 (1993): 595-642. ↩︎
- Ohanian, Lee E. “The Macroeconomic Effects of War Finance in the United States: World War II and the Korean War.” The American Economic Review 87, no. 1 (1997): 23-40. ↩︎
- Geanakoplos, John. “Arrow-Debreu Model of General Equilibrium.” In General Equilibrium, pp. 43-61. London: Palgrave Macmillan UK, 1989. ↩︎
- Friedman, Milton, and Anna J. Schwartz. “Alternative Approaches to Analyzing Economic Data.” The American Economic Review (1991): 39-49. ↩︎
- McCallum, Bennett T. “The Significance of Rational Expectations Theory.” Challenge 22, no. 6 (1980): 37-43. ↩︎
- For a more detailed reflection on the transformations that took place within theoretical economics during the postwar period, see Kuznets, Simon. Postwar Economic Growth: Four Lectures. Cambridge, MA: Harvard University Press, 1964. ↩︎
- Black, Fischer, and Myron Scholes. “The Pricing of Options and Corporate Liabilities.” Journal of Political Economy 81, no. 3 (1973): 637-654. ↩︎
- Embrechts, Paul. “The Wizards of Wall Street: Did Mathematics Change Finance?” Nieuw Archief voor Wiskunde 4, no. 1 (2003): 26-33. ↩︎
- Wikipedia, “2008 financial crisis,” https://en.wikipedia.org/wiki/2008_financial_crisis. ↩︎
Maura Cassidy Burke is a PhD Candidate in the philosophy of science at Utrecht University, where she focuses primarily on the epistemology and metaphysics of cosmology. In addition to her work at Utrecht University, she is also a founding member and board member of the Center of Trial and Error, and the Journal of Trial and Error. For the past two years, Maura has served as one of the managing editors of Shells & Pebbles.
Edited By: Luca Forgiarini, Elian Schure