PI: Felix Kübler (University of Zurich)

Co-PIs: Simon Scheidegger, Olaf Schenk

July 1, 2017 - June 30, 2020

Project Summary

The computation of equilibria in dynamic stochastic general equilibrium models with heterogeneous agents has become increasingly important in financial economics, macroeconomics, and public finance. How to compute global solutions for large-scale problems is an open question of great practical relevance: models with overlapping generations and aggregate uncertainty that could be used to evaluate social security reform proposals, New Keynesian DSGE models with several sectors and monetary and fiscal policy, or models with infinitely lived agents and collateral constraints that could be used to evaluate proposals for the regulation of financial markets. Smaller and more stylized versions of these models have been solved in the literature (see, e.g., Krueger and Kubler (2006) or Brumm et al. (2014)), but existing technologies and codes cannot be scaled to tackle models realistic enough to match basic facts observed in the data.

Building on our earlier work (see Brumm et al. (2016)), we want to consider an Aiyagari-Bewley-style (see, e.g., Bewley (1984)) overlapping generations (OLG) model with incomplete financial markets. This is a discrete-time, infinite-horizon model in which a continuum of ex ante identical agents enters the economy each period. These agents consume and invest over the following ~80 periods (representing their life expectancy). In each period, the economy faces aggregate shocks that affect productivity, and each agent faces an idiosyncratic shock that determines his ability to work (his labor endowment). In this project we plan to extend this model to allow for several types of agents; these could represent several countries, or agents with different education and tastes within a country. As a concrete application, we want to explore global inequality and migration. To this end, we plan to build and calibrate a multi-country stochastic OLG model with 4-5 regions (countries) and use this model economy to understand the effects of country-specific productivity shocks and demographics on migration and global inequality. These are very timely questions: for example, the Transatlantic Trade and Investment Partnership (TTIP) as well as possible mechanisms for redistributing wealth from ultra-high-net-worth individuals to the rest of the population are currently under heavy discussion. We hope that the findings from the proposed work can provide policy makers with quantitative guidance.
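
To fix ideas, an individual agent's problem in such a model can be written recursively; the following is a stylized sketch in our own notation, not taken verbatim from the cited papers:

```latex
V_a(z, \eta, w; \Phi) = \max_{c,\, w'} \; u(c) + \beta\, \mathbb{E}\left[ V_{a+1}(z', \eta', w'; \Phi') \mid z, \eta \right]
\quad \text{s.t.} \quad c + w' \le \big(1 + r(z, \Phi)\big)\, w + \eta\, \ell_a\, W(z, \Phi),
```

where a denotes age, z the aggregate productivity shock, η the idiosyncratic labor shock, w individual wealth, ℓ_a an age-specific labor-efficiency profile, and Φ the cross-sectional wealth distribution, whose law of motion agents must forecast. It is the distribution Φ, entering as part of the aggregate state, that produces the hundreds of continuous dimensions discussed next.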

The computational complexity of the underlying problem, however, has so far been a prohibitive obstacle for researchers in economics. The models we propose to solve have on the order of ~300 continuous dimensions and 32 discrete states that represent possible realizations of the world. Naive, grid-based solution methods consequently suffer from the curse of dimensionality (Bellman (1961)). Brumm and Scheidegger (2014) introduced adaptive sparse grids in the context of dynamic stochastic economic models. By embedding an adaptive sparse grid algorithm into a time-iteration procedure, they were able to solve international real business cycle (IRBC) models of up to one hundred dimensions. This contrasts sharply with previous economic modeling, where researchers were able to handle models of at most twenty dimensions. In order to accelerate these time-consuming computations, Brumm et al. (2015) then extended this framework to contemporary high-performance compute architectures. By exploiting the generic structure of recursive economic problems, they proposed a parallelization scheme that favors hybrid massively parallel computer architectures. Their work included an adaptive sparse grid algorithm and a mixed MPI/Intel TBB/CUDA-Thrust implementation that improves the interprocess communication strategy on massively parallel architectures. Recently, Brumm et al. (2016) and Scheidegger et al. (2016) upgraded this framework to solve annually calibrated OLG models with 59 dimensions and 4 discrete states. Moreover, the code base was also ported to Intel Xeon Phi Knights Landing hardware, including the capability of using AVX-512.
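
The key idea behind the adaptivity is that a grid point is refined only where the local interpolation error, measured by its hierarchical surplus, is still large. Below is a minimal one-dimensional sketch of this criterion (names are ours; the actual framework of Brumm and Scheidegger (2014) operates in many dimensions and is implemented in C++):

```python
import numpy as np

# Minimal 1D sketch of the refinement criterion behind adaptive sparse
# grids: a point is refined only if its hierarchical surplus (the local
# interpolation error left by all coarser points) exceeds a threshold.
# Zero boundary values are assumed, as in the standard hierarchical basis.

def hat(x, level, index):
    """Hierarchical hat basis function centered at index * 2**(-level)."""
    h = 2.0 ** (-level)
    return np.maximum(0.0, 1.0 - np.abs(x / h - index))

def adaptive_refine(f, max_level=10, tol=1e-3):
    """Adaptively refined hierarchical grid for f on [0, 1]."""
    active = [(1, 1)]                       # (level, index) pairs, index odd
    surplus = {}                            # (level, index) -> surplus value
    while active:
        children = []
        for (l, i) in active:
            x = i * 2.0 ** (-l)
            # interpolant built from all coarser points added so far
            u = sum(a * hat(x, ll, ii) for (ll, ii), a in surplus.items())
            s = f(x) - u                    # hierarchical surplus at (l, i)
            surplus[(l, i)] = s
            # refine: add the two children only where the surplus is large
            if abs(s) > tol and l < max_level:
                children += [(l + 1, 2 * i - 1), (l + 1, 2 * i + 1)]
        active = children
    points = np.array([i * 2.0 ** (-l) for (l, i) in surplus])
    return points, surplus

# Example: a kinked, policy-function-like shape; grid points cluster
# automatically around the kink at x = 0.3.
points, _ = adaptive_refine(lambda x: np.maximum(x - 0.3, 0.0) * (1.0 - x),
                            tol=1e-3)
print(len(points), "grid points, concentrated near x = 0.3")
```

Inside a time-iteration step, this criterion makes grid points cluster around kinks and strong non-linearities of the policy functions, which is precisely where naive tensor-product grids waste most of their resolution.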

Solving many-agent OLG models as proposed here is currently intractable with our massively parallelized adaptive sparse grid code base. In order to solve economic models of the required size, we have started to develop a hybrid parallelized (mixed MPI and Intel TBB) scheme that merges adaptive sparse grids with high-dimensional model reduction techniques (see, e.g., Rabitz & Alis (1999), Ma & Zabaras (2010)). This combination of techniques ensures that, in the best case, the computational burden grows only linearly with the dimension. In preliminary results, Eftekhari et al. (2016) were able to solve simple IRBC models of up to 300 continuous dimensions.
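
Where the linear scaling comes from can be seen in a minimal sketch of first-order cut-HDMR, one variant of the decompositions introduced by Rabitz & Alis (1999) (function names are ours): the approximation is assembled from one-dimensional component functions, so the number of sub-problems grows linearly in the dimension.

```python
import numpy as np

# Minimal sketch of first-order cut-HDMR:
#   f(x) ~ f(a) + sum_i [ f(a with x_i swapped in) - f(a) ],
# where a is an "anchor" point.  Each component function is
# one-dimensional, so the number of sub-problems grows only
# linearly in the dimension d.

def hdmr1(f, anchor):
    """Return a first-order cut-HDMR approximation of f around `anchor`."""
    a = np.asarray(anchor, dtype=float)
    f0 = f(a)

    def approx(x):
        x = np.asarray(x, dtype=float)
        val = f0
        for i in range(a.size):
            xi = a.copy()
            xi[i] = x[i]              # vary one coordinate, keep rest at anchor
            val += f(xi) - f0         # 1D component function f_i(x_i)
        return val

    return approx

# Example: an additively separable function is reproduced exactly.
d = 300
f = lambda x: np.sum(np.log(1.0 + x))     # separable in its arguments
approx = hdmr1(f, anchor=np.full(d, 0.5))
x = np.random.rand(d)
print(abs(f(x) - approx(x)))              # ~1e-13: exact for separable f
```

In the combined scheme, each component function would itself be interpolated on an adaptive sparse grid rather than re-evaluated point by point, and higher-order interaction terms can be added where the first-order approximation is insufficient.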

In addition, we have also started to follow an alternative route to tackle high-dimensional economic models: we built an MPI-parallel dynamic programming framework that merges Gaussian processes (GPs) with active subspace methods (Scheidegger & Bilionis (2016 – in preparation)), which was shown to be capable of solving growth models of up to 200 dimensions. The paramount effort of our future research on algorithms for solving many-agent OLG models must therefore be to merge our massively parallel OLG framework (which can handle discrete states) with the newly developed high-dimensional model reduction scheme, and to port the GP framework to OLG models. To this end, our research will pursue two streams (illustrative sketches of both ingredients follow the list below):

  • Ia) In a first step, the current code base (as described in Brumm et al. (2014), Scheidegger et al. (2016), and Eftekhari et al. (2016)) needs to be enhanced algorithmically in order to increase the size of the models that we can solve on reasonable timescales. This code update will mainly consist of building a flexible but very complex, nested MPI topology that can handle discrete states and many (possibly hundreds of) adaptive sparse grids per state at once within a single time step (see the communicator sketch after this list). Moreover, these efforts will need to be combined with attempts to vectorize our emerging data structures along the lines of Heinecke & Pflüger (2013), Brumm et al. (2015), and Scheidegger et al. (2016), such that the new framework will be capable of harnessing the emerging power of heterogeneous node-level parallelism (CPU/GPU and CPU/MIC nodes).
  • Ib) Implementation of the very high-dimensional many-agent OLG model.
  • IIa) Porting the MPI-parallel GP active subspace time-iteration/dynamic programming framework to heterogeneous HPC platforms (the steps will be similar to the ones outlined in Ia).
  • IIb) Implementation of high-dimensional many-agent OLG models inside the Gaussian process (GP) active subspace framework. A particular novelty that comes “for free” when using GPs is that they allow for direct uncertainty quantification and propagation in order to test the robustness of the results; apart from Cai et al. (2015), who addressed this matter by performing a huge parameter study, this stream of research has so far been neglected in the computational economics community.
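
The nested topology outlined in Ia can be illustrated with a minimal mpi4py sketch (all names ours; the production code uses C++ with MPI and Intel TBB): the world communicator is split into one group per discrete state, and each group is split again so that every adaptive sparse grid of a state lives on its own sub-communicator within a single time step.

```python
from mpi4py import MPI

# Minimal sketch of a two-level nested MPI topology: one communicator per
# discrete state, subdivided into one communicator per sparse grid.

world = MPI.COMM_WORLD
n_states = 32                  # discrete aggregate states of the OLG model
grids_per_state = 4            # sparse grids per state (hundreds in practice)

# Outer split: one communicator per discrete state.
state_id = world.rank % n_states
state_comm = world.Split(color=state_id, key=world.rank)

# Inner split: one communicator per sparse grid within a state.
grid_id = state_comm.rank % grids_per_state
grid_comm = state_comm.Split(color=grid_id, key=state_comm.rank)

# Each grid_comm builds and refines "its" sparse grid in parallel; within a
# time-iteration step, results are then combined across states and grids,
# e.g. with a collective on the world communicator:
local_result = 1.0                          # placeholder for a grid solve
total = world.allreduce(local_result, op=MPI.SUM)
if world.rank == 0:
    print("contributions combined from", world.size, "ranks:", total)
```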

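For stream II, the following minimal numpy sketch (toy problem and names ours, not the implementation of Scheidegger & Bilionis) illustrates both ingredients: an active subspace is estimated from gradients, and a Gaussian process is then fit on the low-dimensional projected inputs. The GP's predictive variance is what delivers the uncertainty quantification of item IIb essentially for free.

```python
import numpy as np

# Toy sketch of the two ingredients of stream II:
# (1) estimate an active subspace from gradients of the target function,
# (2) fit a Gaussian process (GP) on the low-dimensional projected inputs.

rng = np.random.default_rng(0)
d, n, k = 200, 400, 2                      # ambient dim, samples, active dim

# Toy target that truly varies only along two hidden orthonormal directions.
W_true, _ = np.linalg.qr(rng.standard_normal((d, k)))
f = lambda X: np.sin(X @ W_true[:, 0]) + 0.5 * (X @ W_true[:, 1]) ** 2
grad = lambda X: (np.cos(X @ W_true[:, 0])[:, None] * W_true[:, 0]
                  + (X @ W_true[:, 1])[:, None] * W_true[:, 1])

X = rng.standard_normal((n, d))
G = grad(X)

# (1) Active subspace: leading eigenvectors of C = E[grad grad^T].
eigvals, eigvecs = np.linalg.eigh(G.T @ G / n)
W = eigvecs[:, -k:]                        # top-k active directions
Y = X @ W                                  # k-dimensional projected inputs

# (2) GP regression with an RBF kernel on the projected inputs.
def rbf(A, B, ell=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

K = rbf(Y, Y) + 1e-6 * np.eye(n)           # jitter for numerical stability
alpha = np.linalg.solve(K, f(X))

Xs = rng.standard_normal((5, d))           # unseen test points
Ks = rbf(Xs @ W, Y)
mean = Ks @ alpha                          # GP posterior mean
cov = rbf(Xs @ W, Xs @ W) - Ks @ np.linalg.solve(K, Ks.T)
var = np.maximum(np.diag(cov), 0.0)        # predictive variance (the UQ part)
print(np.c_[f(Xs), mean, var])             # truth vs. prediction vs. variance
```
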
Note that streams I and II are methodologically independent, providing a hedge in case one research stream turns out to be less promising than expected.

Finally, note that in most natural sciences, computational research, and in particular high-performance computing (HPC), has become a strong, well-established third pillar alongside theory and experimentation. In some of these fields, HPC systems can arguably be considered the most powerful and flexible research instruments available today, allowing researchers to ask questions that would otherwise be impossible to address. In contrast, there is almost no work in economics that takes advantage of high-performance computing. This may seem surprising, given that many economic models display highly non-linear dynamics and are very difficult to solve numerically, and given that the HPC community has been reaching out actively to non-traditional fields over the last couple of years. One possible reason is that economists often lack the skills to perform high-performance computations and do not have large enough research groups to hire computer scientists to help them (as is done routinely in the natural sciences).

To this end, one aim of our PASC project is to bring high-performance computing to economics by disseminating parts of the framework to be developed (which is written in C++ and Fortran) in forms that make it easily accessible to users, e.g., by building interfaces to Python and Julia.
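
From a user's perspective, such an interface might look roughly as follows; this is a purely hypothetical sketch, and the module name olg_solver as well as all functions and parameters are illustrative only, not an existing API:

```python
import olg_solver  # hypothetical Python binding to the C++/Fortran framework

# Set up a stylized multi-country OLG economy (all names illustrative).
model = olg_solver.Model(countries=5, cohorts=80, discrete_states=32)

# Solve for the recursive equilibrium with the adaptive sparse grid method.
solution = model.solve(method="adaptive_sparse_grid", tol=1e-4)

# Simulate equilibrium paths and export them for further analysis.
solution.simulate(periods=1000).to_csv("paths.csv")
```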