Working papers: results
ex-ante optimality requires intergenerational risk sharing. We compare the level of time-consistent intergenerational risk sharing chosen by a social planner and by office-seeking politicians. In the political setting, the transfer of resources across generations - a pay-as-you-go (PAYG) pension system - is determined as a Markov equilibrium of a probabilistic
voting game. Negative shocks, represented by low realized returns on the risky asset, induce politicians to compensate the old through the PAYG system. Unless the young are
crucial to win the election, this political system generates more intergenerational risk
sharing than the (time consistent) social optimum. In particular, these transfers are
more persistent and less responsive to the realization of the shock than is socially optimal. This is because politicians anticipate that their current transfers to the elderly will be compensated by offsetting transfers from future politicians, and thus have an incentive to overspend.
Perhaps surprisingly, aging increases the socially optimal transfer but makes
politicians less likely to overspend, by making it more costly for future politicians to
compensate the current young.
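As a stylized sketch of the political mechanism described above (our notation, not the paper's), the period-t transfer solves

\[
\tau_t = \arg\max_{\tau}\; \omega_y\, U^y(\tau, s_t) + \omega_o\, U^o(\tau, s_t),
\]

where \(\omega_y\) and \(\omega_o\) are the political weights of young and old voters implied by probabilistic voting, \(s_t\) includes the realized return on the risky asset, and expectations over future transfers are formed using the Markov rule \(\tau_{t+1} = T(s_{t+1})\) that future office-holders are expected to follow. When \(\omega_y\) is low, i.e. the young are not crucial to win the election, the equilibrium transfer exceeds the time-consistent social optimum.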
We develop and estimate a medium scale macroeconomic model that allows for unemployment
and staggered nominal wage contracting. In contrast to most existing quantitative models,
employment adjustment is on the extensive margin and the employment of existing workers is efficient. Wage rigidity, however, affects the hiring of new workers; it is introduced via the staggered Nash bargaining setup of Gertler and Trigari (2006). A robust finding is that
the model with wage rigidity provides a better description of the data than does a flexible wage
version. Overall, the model fits the data roughly as well as existing quantitative macroeconomic
models, such as Smets and Wouters (2007) or Christiano, Eichenbaum and Evans (2005). More
work is necessary, however, to ensure a robust identification of the key labor market parameters.
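As a rough illustration of the staggered contracting structure (a stylized approximation, not the paper's exact specification), in a Calvo-style staggered Nash bargaining setup only a fraction \(1-\lambda\) of firms renegotiate each period, so the economy's average wage evolves as

\[
w_t = \lambda\, w_{t-1} + (1-\lambda)\, w_t^{*},
\]

where \(w_t^{*}\) is the newly negotiated contract wage and \(\lambda = 0\) recovers the flexible-wage version against which the model is compared.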
and productivity growth in the Italian manufacturing industries in 1995-2003.
Our results indicate that the off-shoring of intermediates within the same
industry (narrow off-shoring) is beneficial for productivity growth, while
the off-shoring of services is not. We also find that the way in which off-
shoring is measured may matter considerably. The positive relation between off-
shoring of intermediates and productivity growth holds with our direct measures based on input-output data, but disappears when either a broad measure or the Feenstra-Hanson off-shoring measure employed in other studies is used instead.
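For reference, the contrast between the measures runs roughly as follows (standard definitions from this literature; the paper's exact implementation may differ). With import-use input-output data, the narrow measure counts only imported intermediates from industry \(i\)'s own industry, while the broad measure counts imported intermediates from all industries:

\[
OS_i^{\text{narrow}} = \frac{IM_{ii}}{TI_i}, \qquad
OS_i^{\text{broad}} = \sum_j \frac{IM_{ij}}{TI_i},
\]

where \(IM_{ij}\) denotes imported intermediates of good \(j\) used by industry \(i\) and \(TI_i\) its total intermediate inputs. The Feenstra-Hanson measure instead imputes imported inputs from total input purchases \(I_{ij}\) and economy-wide import shares:

\[
OS_i^{FH} = \sum_j \frac{I_{ij}}{TI_i}\,\frac{M_j}{Y_j + M_j - E_j},
\]

with \(M_j\), \(E_j\) and \(Y_j\) the imports, exports and output of good \(j\).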
their preferences concerning an irreversible social decision. Voters can either implement
the project in the first period, or they can postpone the decision to the
second period. We analyze the effects of different majority rules. Individual first-period voting behavior may become "less conservative" under supermajority rules, and it is even possible that a project that would not be implemented under simple majority rule is implemented in the first period under a supermajority rule.
We characterize the optimal majority rule, which is a supermajority rule. In
contrast to individual investment problems, society may be better off without the option to postpone the decision. These results are qualitatively robust to
natural generalizations of our model.
If successful, the innovative effort allows the innovator to take new actions that may be ex-post welfare enhancing (legal) or welfare decreasing (illegal). Deterrence in this setting works by affecting
the incentives to invest in innovation (average deterrence). Type-I errors, through over-
enforcement, discourage innovative effort while type-II errors (under-enforcement) spur it.
The ex-ante expected welfare effect of innovations shapes the optimal policy design. When
innovations are ex-ante welfare improving, laissez-faire is chosen. When innovations are
instead welfare decreasing, law enforcement should limit them through average deterrence.
We consider several policy environments differing in the instruments available. Enforcement
effort is always positive and fines are (weakly) increasing in the social loss of innovations. In
some cases accuracy is not implemented, contrary to the traditional model where it always
enhances (marginal) deterrence, while in others it is improved selectively only on type-II
errors (asymmetric protocols of investigation).
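A stylized way to see the "average deterrence" channel at work (our notation; the paper's model is richer): suppose a successful innovation turns out to be legal with probability \(q\), let \(\alpha\) be the probability that a legal action is erroneously sanctioned (type-I error) and \(\beta\) the probability that an illegal action escapes sanction (type-II error). With fine \(f\) and action payoffs \(\pi_L\) and \(\pi_I\), the expected private return to a successful innovation is

\[
R = q\,(\pi_L - \alpha f) + (1-q)\,\bigl(\pi_I - (1-\beta) f\bigr),
\]

which is decreasing in \(\alpha\) and increasing in \(\beta\): over-enforcement discourages innovative effort and under-enforcement spurs it, as stated above.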
cointegration and dynamic factor models. It introduces the Factor-augmented Error
Correction Model (FECM), where the factors estimated from a large set of variables in levels
are jointly modelled with a few key economic variables of interest. With respect to the standard
ECM, the FECM protects, at least in part, against omitted-variable bias and against the dependence of cointegration analysis on the specific, limited set of variables under analysis. In some cases it may also refine the standard Dynamic Factor Model (DFM), since it includes error-correction terms in the equations and, by allowing for cointegration, prevents the errors from being non-invertible moving-average processes. In addition, the FECM is a
natural generalization of the factor-augmented VARs (FAVARs) considered by Bernanke, Boivin and Eliasz (2005), inter alia, which are specified in first differences and are therefore misspecified in
the presence of cointegration. The FECM has a wide range of applications. A set of Monte Carlo
experiments and two detailed empirical examples highlight its merits in finite samples relative to
standard ECM and FAVAR models. The analysis is conducted primarily within an in-sample
framework, although the out-of-sample implications are also explored.
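In its simplest form, and purely as a schematic representation consistent with the description above, the FECM models a small vector of variables of interest \(y_t\) jointly with factors \(F_t\) estimated from the large panel in levels:

\[
\Delta y_t = \alpha\, \beta' \begin{pmatrix} y_{t-1} \\ F_{t-1} \end{pmatrix}
+ \sum_{j=1}^{p} \Gamma_j\, \Delta y_{t-j}
+ \sum_{j=1}^{p} \Lambda_j\, \Delta F_{t-j} + \varepsilon_t,
\]

where the error-correction terms capture cointegration between the variables of interest and the factors; shutting down the loadings \(\alpha\) yields a FAVAR-type specification in first differences.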
diffusion index-based methods in short samples with structural change. We
consider several data generation processes to mimic different types of
structural change, and compare the relative forecasting performance of factor
models and more traditional time series methods. We find that changes in the
loadings of the factors on the variables of interest are extremely
important in determining the performance of factor models. We complement
the analysis with an empirical evaluation of forecasts for the key
macroeconomic variables of the Euro area and Slovenia, for which relatively
short samples are officially available and structural changes are likely. The
results are consistent with the findings of the simulation exercise, and confirm the relatively good performance of factor-based forecasts even in short samples
with structural change.
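A minimal, self-contained sketch of the kind of Monte Carlo exercise described above (our own illustration in Python, not the authors' code; all parameter values are invented) simulates a one-factor panel with a mid-sample break in the loadings and compares one-step-ahead factor forecasts with an AR(1) benchmark:

```python
# Hypothetical Monte Carlo sketch (invented parameters): a one-factor panel
# with a break in the factor loadings at T_break, comparing one-step-ahead
# factor-based forecasts of a target series against an AR(1) benchmark.
import numpy as np

rng = np.random.default_rng(0)
T, N, T_break = 120, 50, 60          # short sample, break halfway

# Latent AR(1) factor; loadings change at the break date.
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.8 * f[t - 1] + rng.normal()
lam_pre = rng.normal(1.0, 0.3, N)
lam_post = lam_pre + rng.normal(0.0, 1.0, N)   # loading break
X = np.where(np.arange(T)[:, None] < T_break,
             np.outer(f, lam_pre), np.outer(f, lam_post))
X += rng.normal(size=(T, N))
y = X[:, 0]                                    # target variable

# Recursive one-step-ahead forecasts over the last 40 periods.
err_fac, err_ar = [], []
for t in range(T - 40, T):
    Z = X[:t] - X[:t].mean(0)
    fhat = Z @ np.linalg.svd(Z, full_matrices=False)[2][0]  # first PC
    # Factor forecast: regress y(s+1) on a constant and fhat(s).
    A = np.column_stack([np.ones(t - 1), fhat[:-1]])
    b = np.linalg.lstsq(A, y[1:t], rcond=None)[0]
    err_fac.append(y[t] - (b[0] + b[1] * fhat[-1]))
    # AR(1) benchmark: regress y(s+1) on a constant and y(s).
    B = np.column_stack([np.ones(t - 1), y[:t - 1]])
    c = np.linalg.lstsq(B, y[1:t], rcond=None)[0]
    err_ar.append(y[t] - (c[0] + c[1] * y[t - 1]))

print("MSFE, factor model:", np.mean(np.square(err_fac)))
print("MSFE, AR(1):       ", np.mean(np.square(err_ar)))
```

Varying the size of the loading break then traces out how the relative performance of the two forecasts changes, in the spirit of the comparison described above.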
models that can handle unbalanced datasets. Due to the different release lags of business cycle
indicators, data unbalancedness often emerges at the end of multivariate samples, a feature sometimes referred to as the 'ragged edge' of the data. Using a large monthly dataset of the German
economy, we compare the performance of different factor models in the presence of the ragged edge:
static and dynamic principal components based on realigned data, the Expectation-Maximisation
(EM) algorithm and the Kalman smoother in a state-space model context. The monthly factors
are used to estimate current-quarter GDP, called the 'nowcast', using different versions of what
we call factor-based mixed-data sampling (Factor-MIDAS) approaches. We compare all possible
combinations of factor estimation methods and Factor-MIDAS projections with respect to nowcast performance. Additionally, we compare the performance of the nowcast factor models with
the performance of quarterly factor models based on time-aggregated and thus balanced data,
which neglect the most timely observations of business cycle indicators at the end of the sample.
Our empirical findings show that the factor estimation methods do not differ much with respect
to nowcasting accuracy. Concerning the projections, the most parsimonious MIDAS projection
performs best overall. Finally, quarterly models are in general outperformed by the nowcast factor
models that can exploit ragged-edge data.
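To fix ideas, here is a minimal sketch of the realignment-plus-nowcast pipeline (our own Python illustration, not the paper's code; the data, publication lags and the simple projection are invented stand-ins for the Factor-MIDAS regressions compared in the paper):

```python
# Hypothetical nowcasting sketch (invented data and lags): realign a ragged-
# edge monthly panel by publication lag, extract a factor by principal
# components, and nowcast quarterly GDP growth with a parsimonious projection
# on the latest within-quarter factor value.
import numpy as np

rng = np.random.default_rng(1)
T, N = 240, 30                       # 20 years of monthly data, 30 indicators
lags = rng.integers(0, 3, N)         # publication lag in months per indicator

# Latent monthly factor and indicator panel.
f = np.zeros(T)
for t in range(1, T):
    f[t] = 0.9 * f[t - 1] + rng.normal()
X = np.outer(f, rng.normal(1.0, 0.5, N)) + rng.normal(size=(T, N))
for i, k in enumerate(lags):         # ragged edge: last k obs not yet released
    if k:
        X[-k:, i] = np.nan

# Realignment: shift each series by its lag so the end of the panel is balanced.
Xr = np.column_stack([np.roll(X[:, i], lags[i]) for i in range(N)])
Xr = Xr[lags.max():]                 # drop rows contaminated by the shift
Z = (Xr - Xr.mean(0)) / Xr.std(0)
fhat = Z @ np.linalg.svd(Z, full_matrices=False)[2][0]   # monthly factor

# Synthetic quarterly GDP growth, built to load on the true factor.
Tq = len(fhat) // 3
f_m = fhat[:Tq * 3].reshape(Tq, 3)   # estimated factor, month-by-quarter
f_q = f[lags.max():][:Tq * 3].reshape(Tq, 3).mean(1)
y = 0.5 * f_q + 0.3 * rng.normal(size=Tq)

# Parsimonious projection: regress GDP growth on the last monthly factor
# value of each quarter (a simple stand-in for a Factor-MIDAS regression).
A = np.column_stack([np.ones(Tq - 1), f_m[:-1, -1]])
b = np.linalg.lstsq(A, y[:-1], rcond=None)[0]
print("nowcast:", b[0] + b[1] * f_m[-1, -1], " actual:", y[-1])
```

In the paper, the projection step is replaced by the various Factor-MIDAS variants and the factor-extraction step by the EM algorithm or Kalman-smoother estimates, which is exactly the grid of combinations whose nowcast performance is compared above.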