Working papers
IGIER fellows and affiliates publish books and articles in academic journals. Their current research projects are featured in the Working Paper series.
on sellers' investment. We show that a retailer extracts a larger
surplus from the negotiation with an upstream manufacturer the
more it is essential to the creation of total surplus. In turn, this
depends on the rivalry between retailers in the bargaining process.
Rivalry increases when the retail market is more fragmented, when
the retailers are less differentiated and when decreasing returns to
scale in production are larger. The allocation of total surplus also
affects the incentives of producers to invest in product quality, an instance
of the hold-up problem. This not only makes both the supplier and
consumers worse off, but it may also harm the retailers.
Keywords: Retailers' power, Hold-up, Supplier's under-investment
Our paper seeks to answer this question by providing evidence on the
age-productivity and age-earnings profiles for a sample of plants in three
manufacturing industries (forest, industrial machinery and electronics) in
Finland. Our main result is that exposure to rapid technological and managerial
changes does make a difference for plant productivity, less so for wages. In
electronics, the Finnish industry undergoing a major technological and
managerial shock in the 1990s, the response of productivity to age-related
variables is first sizably positive and then becomes sizably negative as one
looks at plants with higher average seniority and experience. This declining
part of the curve is not there for either the forest industry or industrial
machinery, nor is it there for wages in electronics. These conclusions
survive when a host of other plausible productivity determinants (notably,
education and plant vintage) are included in the analysis. We conclude that
workforce aging may be a burden for firms in high-tech industries and less so in
other industries.
We study the joint dynamics of economic and political change. Predictions of the simple model that we formulate in the paper get
considerable support in a panel of data on political regimes and GDP per capita for about 150 countries over 150 years. Democratic
capital - measured by a nation's historical experience with democracy and by the incidence of democracy in its neighborhood - reduces
the exit rate from democracy and raises the exit rate from autocracy. In democracies, a higher stock of democratic capital stimulates
growth in an indirect way by decreasing the probability of a successful coup. Our results suggest a virtuous circle, where the accumulation
of physical and democratic capital reinforce each other, promoting economic development jointly with the consolidation of democracy.
model misspecification. The principal tools used to solve robust control problems
are state-space methods (see Hansen and Sargent, 2006, and Giordani and
Soderlind, 2004). In this paper we show that the structural-form methods
developed by Dennis (2006) to solve control problems with rational expectations
can also be applied to robust control problems, with the advantage that they
bypass the task, often onerous, of having to express the reference model in
state-space form. Interestingly, because state-space forms and structural forms
are not unique, the two approaches do not necessarily return the same equilibria
for robust control problems. We apply both state-space and structural solution
methods to an empirical New Keynesian business cycle model and find that the
differences between the methods are both qualitatively and quantitatively important.
In particular, with the structural-form solution methods the specification errors generally
involve changes to the conditional variances in addition to the conditional means of the
shock processes.
is attracting considerable attention. In this paper we briefly review the underlying
theory and then compare the impulse response functions resulting from two alternative
estimation methods for the DFM. Finally, as an example, we reconsider the issue of
the identification of the driving forces of the US economy, using data for about 150
macroeconomic variables.
considerable attention recently, due to the increased availability of large datasets. In
this paper we propose a new parametric methodology for estimating factors from large
datasets based on state space models and discuss its theoretical properties. In particular,
we show that it is possible to consistently estimate the factor space. We also
develop a consistent information criterion for the determination of the number of factors
to be included in the model. Finally, we conduct a set of simulation experiments
that show that our approach compares well with existing alternatives.
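The paper's estimator is parametric and state-space based; as a rough illustration of the generic problem it addresses, the sketch below estimates factors by principal components and selects their number with a Bai and Ng (2002)-type information criterion. This is a standard nonparametric benchmark, not the paper's method, and all names and simulated data are ours.

```python
import numpy as np

def estimate_factors(X, r):
    """Principal-components estimate of r factors from a (T x N) panel X."""
    T, _ = X.shape
    # Eigenvectors of the (T x T) second-moment matrix give the factors.
    eigval, eigvec = np.linalg.eigh(X @ X.T / (T * X.shape[1]))
    F = np.sqrt(T) * eigvec[:, -r:][:, ::-1]   # T x r estimated factors
    L = X.T @ F / T                            # N x r estimated loadings
    return F, L

def ic_p2(X, r_max=10):
    """Bai-Ng IC_p2: pick r minimizing log mean squared residual + penalty."""
    T, N = X.shape
    penalty = (N + T) / (N * T) * np.log(min(N, T))
    ics = []
    for r in range(1, r_max + 1):
        F, L = estimate_factors(X, r)
        resid = X - F @ L.T
        ics.append(np.log((resid ** 2).mean()) + r * penalty)
    return int(np.argmin(ics)) + 1

# Simulated panel with two true factors.
rng = np.random.default_rng(0)
T, N, r_true = 200, 100, 2
X = rng.standard_normal((T, r_true)) @ rng.standard_normal((N, r_true)).T \
    + rng.standard_normal((T, N))
X = (X - X.mean(0)) / X.std(0)                 # standardize each series
print("selected number of factors:", ic_p2(X)) # typically prints 2
```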
integrates labor market search and matching into an otherwise
standard New Keynesian model. I allow for changes of the labor
input at both the extensive and the intensive margin and develop
two alternative specifications of the bargaining process. Under
efficient bargaining (EB), hours are determined jointly by the firm
and the worker as a part of the same Nash bargain that determines
wages. With right to manage (RTM), instead, firms retain the right to
set hours of work unilaterally. I show that introducing search and
matching frictions affects the cyclical behavior of real marginal costs
by way of two different channels: a wage channel under RTM and an
extensive margin channel under EB. In both cases, the presence of
search and matching frictions may cause a lower elasticity of marginal
costs with respect to output and thus help to account for the observed
inertia in inflation.
parameter estimation and model evaluation when the objective function measures
the distance between estimated and model impulse responses. We show that
observational equivalence, partial and weak identification problems are widespread, that
they lead to biased estimates, unreliable t-statistics and may induce investigators to
select false models. We examine whether different objective functions affect identification
and study how small samples interact with parameters and shock identification.
We provide diagnostics and tests to detect identification failures and apply them to a
state-of-the-art model.
attempts to address this question that exploited within-country variation.
It shows that the answer is largely positive, but also depends on the details
of democratic reforms. First, the sequence of economic vs political reforms
matters: countries liberalizing their economy before extending political rights
do better. Second, different forms of democratic government lead to different
economic policies, and this might explain why presidential democracy leads
to faster growth than parliamentary democracy. Third, it is important to distinguish
between expected and actual political reforms. Taking expectations of regime
change into account helps identify a stronger growth effect of democracy.
The Italian economy is often said to be on a declining path. In this paper, we document that:
(i) Italy's current decline is a labor productivity problem; (ii) the labor productivity slowdown
stems from declining productivity growth in all industries but utilities (with manufacturing
contributing about one half of the reduction) and diminished inter-industry reallocation of
workers from agriculture to market services; (iii) the labor productivity slowdown has been
mostly driven by declining TFP, with roughly unchanged capital deepening. The only mild
decline of capital deepening is due to the rise in the value added share of capital that
counteracted declining capital accumulation.
We lay out a tractable model for fiscal and monetary policy analysis in
a currency union, and analyze its implications for the optimal design of such
policies. Monetary policy is conducted by a common central bank, which sets
the interest rate for the union as a whole. Fiscal policy is implemented at
the country level, through the choice of government spending level. The model
incorporates country-specific shocks and nominal rigidities. Under our assumptions,
the optimal monetary policy requires that inflation be stabilized at the
union level. On the other hand, the relinquishment of an independent monetary
policy, coupled with nominal price rigidities, generates a stabilization role
for fiscal policy, one beyond the efficient provision of public goods. Interestingly,
the stabilizing role for fiscal policy is shown to be desirable not only from
the viewpoint of each individual country, but also from that of the union as
a whole. In addition, our paper offers some insights on two aspects of policy
design in currency unions: the conditions for equilibrium determinacy and
the effects of exogenous government spending variations.
Pooling forecasts obtained from different procedures typically reduces
the mean square forecast error and more generally improves the quality
of the forecast. In this paper we evaluate whether pooling interpolated
or backdated time series obtained from different procedures can also
improve the quality of the generated data. Both simulation results
and empirical analyses with macroeconomic time series indicate that
pooling plays a positive and important role also in this context.
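A minimal synthetic illustration of the pooling logic (not the paper's interpolation experiment): averaging two unbiased estimates of the same series with imperfectly correlated errors lowers the mean square error relative to either input. All labels and numbers below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.cumsum(rng.standard_normal(500))  # latent "true" monthly series
est_a = truth + rng.normal(0, 1.0, 500)      # interpolation procedure A
est_b = truth + rng.normal(0, 1.0, 500)      # interpolation procedure B
pooled = 0.5 * (est_a + est_b)               # equal-weight pooling

mse = lambda e: np.mean((e - truth) ** 2)
print(f"MSE A: {mse(est_a):.2f}  MSE B: {mse(est_b):.2f}  "
      f"pooled: {mse(pooled):.2f}")
# With independent errors, the pooled MSE is about half that of each input.
```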
In this paper we assess the possibility of producing unbiased forecasts for fiscal variables in the
euro area by comparing a set of procedures that rely on different information sets and
econometric techniques. In particular, we consider ARMA models, VARs, small scale semi-
structural models at the national and euro area level, institutional forecasts (OECD), and
pooling. Our small scale models are characterized by the joint modelling of fiscal and monetary
policy using simple rules, combined with equations for the evolution of all the relevant
fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on
the basis of their forecasting performance using the mean square and mean absolute error
criteria at different horizons. Overall, simple time series methods and pooling work well and are
able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt-GDP
dynamics. This result is mostly due to the short sample available, to the robustness of simple
methods to structural breaks, and to the difficulty of modelling the joint behaviour of several
variables in a period of substantial institutional and economic changes. A bootstrap experiment
highlights that, even when the data are generated using the estimated small scale multi
country model, simple time series models can produce more accurate forecasts, due to
their parsimonious specification.
Many countries, especially developing ones, follow procyclical fiscal policies, namely spending goes up (taxes go down) in booms and spending goes down (taxes go up) in recessions. We provide an explanation for this suboptimal fiscal policy based upon political distortions and incentives for less-than-benevolent governments to appropriate rents. Voters have incentives similar to the classic "starve the Leviathan"
argument, and demand more public goods or fewer taxes to prevent governments from appropriating rents when the economy is doing well.
We test this argument against more traditional explanations based purely on borrowing constraints, with a reasonable amount of success.
Do countries gain by coordinating their monetary policies if they have different economic structures? We address this issue in the context of a new open-economy macro model with a traded and a non-traded sector and, more importantly, with an across-country asymmetry in the size of the traded sector. We study optimal monetary policy under independent and cooperating central banks, based on analytical expressions for welfare objectives derived from quadratic approximations to individual preferences. In the presence of asymmetric structures, a new source of gains from coordination emerges due to a terms-of-trade externality. This externality unfavorably affects the country that is more exposed to trade and its effects tend
to be overlooked when national central banks act independently. The welfare gains from coordination are sizable and increase with the degree of asymmetry across countries and the degree of openness, and decrease with the within-country correlation of sectoral shocks.
We study whether fiscal restrictions affect volatilities and correlations of macrovariables
and the probability of excessive debt for a sample of 48 US states. Fiscal constraints are
characterized with a number of indicators, and volatility and correlations are computed in several
ways. The second moments of macroeconomic variables in states with different fiscal constraints
are economically and statistically similar. Excessive debt and the mechanism linking budget
deficit and excessive debts are independent of whether tight or loose fiscal constraints are in
place. Creative budget accounting may account for the results.
We study how constrained fiscal policy can affect regional inflation and output in a two-region model of a monetary union with sticky prices and distortionary taxation. Both government expenditure and taxes can be used to stabilize regional variables; however, the best welfare outcome is obtained under constant taxes and constant regional inflations. With cooperation, debt and deficit constraints reduce regional inflation variability, but the path of output is suboptimal. Under non-cooperation, the opposite occurs due to a trade-off between taxation and inflation variability. Decentralized rules, rather than constraints, stabilize regional inflation and output. They imply more fiscal action for smaller union members.
We study the mechanics of transmission of fiscal shocks to labor markets. We
characterize a set of robust implications following government consumption, investment
and employment shocks in an RBC and a New-Keynesian model and use part of them to
identify shocks in the data. In line with the New-Keynesian story, shocks to government
consumption and investment increase real wages and employment contemporaneously
both in US aggregate and in US state data. The dynamics in response to employment
shocks are mixed, but in many cases are inconsistent with the predictions of the RBC
model.
Does culture have a causal effect on economic development? The data on European
regions suggest that it does. Culture is measured by indicators of individual values
and beliefs, such as trust and respect for others, and confidence in individual self-determination.
To isolate the exogenous variation in culture, I rely on two historical
variables used as instruments: the literacy rate at the end of the XIXth century, and
the political institutions in place over the past several centuries. The political and
social history of Europe provides a rich source of variation in these two variables at a
regional level. The exogenous component of culture due to history is strongly
correlated with current regional economic development, after controlling for
contemporaneous education, urbanization rates around 1850 and national effects.
Moreover, the data do not reject the over-identifying assumption that the two
historical variables used as instruments only influence regional development through
culture. The indicators of culture used in this paper are also strongly correlated with
economic development and with available measures of institutions in a cross-country
setting.
Consumption is striking back. Some recent evidence indicates that
the well-known asset pricing puzzles generated by the difficulties of
matching fluctuations in asset prices with high frequency fluctuations
in consumption might be solved by considering consumption in
the long-run. A first strand of the literature concentrates on multiperiod
differences in log consumption, a second concentrates on the
cointegrating relation for consumption. Interestingly, only the (multiperiod)
Euler Equation for the consumer optimization problem is
considered by the first strand of the literature, while the cointegration-based
literature concentrates exclusively on the (linearized) intertemporal
budget constraint. In this paper, we show that using the first
order condition in the linearized budget constraint to derive an explicit
long-run consumption function delivers an even more striking
strike back.
This paper studies how a central bank's preference for robustness against
model misspecification affects the design of monetary policy in a New-Keynesian
model of a small open economy. Due to the simple model structure,
we are able to solve analytically for the optimal robust policy rule, and we
separately analyze the effects of robustness against misspecification concerning
the determination of inflation, output and the exchange rate. We show that
an increased central bank preference for robustness makes monetary policy
respond more aggressively or more cautiously to shocks, depending on the
type of shock and the source of misspecification.
This paper introduces underground activities and tax evasion into a one-sector dynamic general equilibrium model with external effects. The model presents a novel mechanism driving the self-fulfilling prophecies, which is triggered by the reallocation of resources to the underground sector to avoid the excess tax burden. This mechanism differs from the customary one, and it is complementary to it. In addition, the explicit introduction of an (even tiny) underground sector makes it possible to reduce the aggregate degree of increasing returns required for indeterminacy, and for having well-behaved input demand schedules (in the sense that they slope down).
Journal of Economic Literature Classification Numbers: O40, E260
A central problem for the game theoretic analysis of voting is that voting games
have very many Nash equilibria. In this paper, we consider a new refinement
concept for voting games that combines two ideas that appear reasonable for voting
games: First, trembling hand perfection (voters sometimes make mistakes when
casting their vote) and second, coordination of voters with similar interests. We
apply this refinement to an analysis of multicandidate elections under plurality rule
and runoff rule.
For plurality rule, we show that our refinement implies Duverger's law: In all
equilibria, (at most) two candidates receive a positive number of votes. For the case
of 3 candidates, we can completely characterize the set of equilibria. Often, there
exists a unique equilibrium satisfying our refinement; surprisingly, this is even true
if there is no Condorcet winner. We also consider the equilibria under a runoff rule
and analyze when plurality rule and runoff rule yield different outcomes.
Building on recent work on dynamic interactive epistemology, we
extend the analysis of extensive-form psychological games (Geanakoplos,
Pearce & Stacchetti, Games and Economic Behavior, 1989) to
include conditional higher-order beliefs and enlarged domains of pay-off
functions. The approach allows modeling dynamic psychological
effects (such as sequential reciprocity, psychological forward induction,
and regret) that are ruled out when epistemic types are identified with
hierarchies of initial beliefs. We define a notion of psychological sequential
equilibrium, which generalizes the sequential equilibrium notion for
traditional games, for which we prove existence under mild assumptions.
Our framework also allows us to directly formulate assumptions about
"dynamic" rationality and interactive beliefs in order to explore strategic
interaction without assuming that players' beliefs are coordinated on an
equilibrium. In particular, we provide an exploration of (extensive-form)
rationalizability in psychological games.
We provide an updated summary guide for the construction, use and evaluation of
leading indicators, and an assessment of the most relevant recent developments in this
field of economic forecasting. To begin with, we analyze the problem of selecting a
target coincident variable for the leading indicators, which requires coincident indicator
selection, construction of composite coincident indexes, choice of filtering methods,
and business cycle dating procedures to transform the continuous target into a binary
expansion/recession indicator. Next, we deal with criteria for choosing good leading
indicators, and simple non-model based methods to combine them into composite indexes.
Then, we examine models and methods to transform the leading indicators into
forecasts of the target variable. Finally, we consider the evaluation of the resulting
leading indicator based forecasts, and review the recent literature on the forecasting
performance of leading indicators.
Iterated multiperiod-ahead time series forecasts are made using a one-period-ahead model, iterated forward for the desired number of periods, whereas direct forecasts are made using a horizon-specific estimated model, where the dependent variable is the multiperiod-ahead value being forecasted. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 U.S. monthly macroeconomic time series spanning 1959-2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon.
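The mechanics of the comparison can be sketched on a single simulated AR(1) series (the paper does this across 171 series with richer model selection); the code below is a hedged toy version, with all parameters chosen for illustration only.

```python
import numpy as np

def ols(y, X):
    # Least-squares coefficients.
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(2)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()  # AR(1) data-generating process

h, split = 6, 300
train = y[:split]

# Iterated: estimate a one-step AR(1), then iterate it h steps ahead.
b_it = ols(train[1:], np.column_stack([np.ones(split - 1), train[:-1]]))
# Direct: regress y_{t+h} directly on y_t with a horizon-specific model.
b_dir = ols(train[h:], np.column_stack([np.ones(split - h), train[:-h]]))

err_it, err_dir = [], []
for t in range(split, 400 - h):
    f = y[t]
    for _ in range(h):                   # iterate the one-step model forward
        f = b_it[0] + b_it[1] * f
    err_it.append(y[t + h] - f)
    err_dir.append(y[t + h] - (b_dir[0] + b_dir[1] * y[t]))

print("iterated MSFE:", np.mean(np.square(err_it)))
print("direct   MSFE:", np.mean(np.square(err_dir)))
```

With a correctly specified one-step model, as here, the iterated forecast should (weakly) win; misspecifying the lag structure tilts the comparison toward the direct forecast.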
We analyse the panel of the Greenbook forecasts (sample 1970-1996) and a
large panel of monthly variables for the US (sample 1970-2003) and show that
the bulk of dynamics of both the variables and their forecasts is explained by two
shocks. Moreover, a two-factor model which exploits, in real time, information
on many time series to extract a two dimensional signal, produces a degree of
forecasting accuracy of the federal funds rate similar to that of the markets, and,
for output and inflation, similar to that of the Greenbook forecasts. This leads us
to conclude that the stochastic dimension of the US economy is two. We also show
that dimension two is generated by a real and nominal shock, with output mainly
driven by the real shock and inflation by the nominal shock. The implication is
that, by tracking any forecastable measure of real activity and price dynamics, the
Central Bank can track all fundamental dynamics in the economy.
How does the relationship between an investor and entrepreneur depend on the legal
system? In a double moral hazard framework, we show how optimal contracts,
corporate governance, and investor actions depend on the legal system. With better
legal protection, investors give more non-contractible support, demand more downside
protection, and exercise more governance. Investors in better legal systems develop
stronger governance and support competencies. Therefore, when investing in a different
legal system, they behave differently than local investors. We test these predictions
using a hand-collected dataset of European venture capital deals. The empirical
results confirm the predictions of the model.
We employ Markov-switching regression methods to estimate fiscal policy feedback rules
in the U.S. for the period 1960-2002. Our approach allows us to capture policy regime changes
endogenously. We reach three main conclusions. First, fiscal policy may be characterized,
according to Leeper's (1991) terminology, as active from the 1960s throughout the 1980s, switching
gradually to passive in the early 1990s and switching back to active in early 2001. Second,
regime-switching fiscal rules are capable of tracking the time-series behaviour of the U.S. primary
deficit better than rules based on a constant parameter specification. Third, regime-switches in
monetary and fiscal policy rules do not exhibit any degree of synchronization. Our results are
at odds with the view that the post-war U.S. fiscal policy regime may be classified as passive at
all times, and seem to pose a challenge for the specification of the correct monetary-fiscal mix
within recent optimizing macroeconomic models considered suitable for policy analysis.
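A rule of this type can be estimated with off-the-shelf Markov-switching regression tools. The sketch below uses statsmodels on simulated data, with a single regressor standing in for the fiscal feedback term; it only illustrates the mechanics, not the paper's specification or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 200
debt = np.cumsum(rng.normal(0, 0.5, n))        # hypothetical debt regressor
regime = (np.arange(n) // 50) % 2              # two alternating regimes
beta = np.where(regime == 0, 0.05, -0.03)      # regime-specific feedback
surplus = beta * debt + rng.normal(0, 0.2, n)  # simulated primary surplus

# Two-regime switching regression with regime-specific variance.
mod = sm.tsa.MarkovRegression(surplus, k_regimes=2, exog=debt,
                              switching_variance=True)
res = mod.fit()
print(res.params)
# Smoothed regime probabilities date the estimated policy switches.
print(res.smoothed_marginal_probabilities[:5])
```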
We explore the determinants of yield differentials between sovereign bonds in the Euro
area. There is a common trend in yield differentials, which is correlated with a measure
of the international risk factor. In contrast, liquidity differentials display sizeable heterogeneity
and no common factor. We present a model that predicts that yield differentials
should increase in both liquidity and risk, with an interaction term whose magnitude and
sign depend on the size of the liquidity differential with respect to the reference country.
Testing these predictions on daily data, we find that the international risk factor is consistently
priced, while liquidity differentials are priced only for a subset of countries and
their interaction with the risk factor is crucial to detect their effect.
This paper brings together two strands of the empirical macro literature:
the reduced-form evidence that the yield spread helps in forecasting output
and the structural evidence on the difficulties of estimating the effect of monetary
policy on output in an intertemporal Euler equation. We show that
including a short-term interest rate and inflation in the forecasting equation
improves the forecasting performance of the spread for future output but the
coefficients on the short rate and inflation are difficult to interpret using a
standard macroeconomic framework. A decomposition of the yield spread
into an expectations-related component and a term premium allows a better
understanding of the forecasting model. In fact, the best forecasting model for
output is obtained by considering the term premium, the short-term interest
rate and inflation as predictors. We provide a possible structural interpretation
of these results by allowing for time-varying risk aversion, linearly related
to our estimate of the term premium, in an intertemporal Euler equation for
output.
We study optimal monetary policy in two prototype economies with sticky prices and credit
market frictions. In the first economy, credit frictions apply to the financing of the capital stock,
generate acceleration in response to shocks and the financial markup (i.e., the premium on
external funds) is countercyclical and negatively correlated with the asset price. In the second
economy, credit frictions apply to the flow of investment, generate persistence, and the financial
markup is procyclical and positively correlated with the asset price. We model monetary policy
in terms of welfare-maximizing interest rate rules. The main finding of our analysis is that strict
inflation stabilization is a robust optimal monetary policy prescription. The intuition is that, in
both models, credit frictions work in the direction of dampening the cyclical behavior of inflation
relative to its credit-frictionless level. Thus neither economy, despite yielding different inflation
and investment dynamics, generates a trade-off between price and financial markup stabilization.
A corollary of this result is that reacting to asset prices does not bear any independent welfare
role in the conduct of monetary policy.
We provide a long-term perspective on individual retirement behavior
and on the future of early retirement. In a cross-country sample, we
find that total pension spending depends positively on the degree of early
retirement and on the share of elderly in the population, which increase
the proportion of retirees but have hardly any effect on the per-capita pension
benefits. We show that, in a Markovian political economy framework
in which incentives to retire early are embedded, a political
equilibrium is characterized by an increasing sequence of social security
contribution rates converging to a steady state and early retirement. Comparative
statics suggest that aging and productivity slow-downs lead to
higher taxes and more early retirement. However, when income effects
are factored in, the model suggests that periods of stagnation - characterized
by decreasing labor income - may lead middle-aged individuals to
postpone retirement.
Using a structural Vector Autoregression approach, this paper compares the
macroeconomic effects of the three main government spending tools: government
investment, consumption, and transfers to households, both in terms of the size
and the speed of their effects on GDP and its components. Contrary to common
opinion, there is no evidence that government investment shocks are more
effective than government consumption shocks in boosting GDP: this is true both
in the short and, perhaps more surprisingly, in the long run. In fact, government
investment appears to crowd out private investment, especially in dwellings and in
machinery and equipment. There is no evidence that government investment pays
for itself in the long run, as proponents of the Golden Rule implicitly or explicitly
argue. The positive effects of government consumption itself are rather limited,
and defense purchases have even smaller (or negative) effects on GDP and private
investment. There is also no evidence that government transfers are more effective
than government consumption in stimulating demand.
This paper studies the effects of fiscal policy on GDP, inflation and interest rates
in 5 OECD countries, using a structural Vector Autoregression approach. Its main
results can be summarized as follows: 1) The effects of fiscal policy on GDP tend
to be small: government spending multipliers larger than 1 can be estimated only
in the US in the pre-1980 period. 2) There is no evidence that tax cuts work faster
or more effectively than spending increases. 3) The effects of government spending
shocks and tax cuts on GDP and its components have become substantially weaker
over time; in the post-1980 period these effects are mostly negative, particularly on
private investment. 4) Only in the post-1980 period is there evidence of positive
effects of government spending on long interest rates. In fact, when the real interest
rate is held constant in the impulse responses, much of the decline in the response
of GDP in the post-1980 period in the US and UK disappears. 5) Under plausible
values of its price elasticity, government spending typically has small effects on
inflation. 6) Both the decline in the variance of the fiscal shocks and the change
in their transmission mechanism contribute to the decline in the variance of GDP
after 1980.
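For readers unfamiliar with the mechanics behind such estimates, the snippet below fits a reduced-form VAR and computes orthogonalized impulse responses with statsmodels on simulated placeholder data. The paper's structural identification of fiscal shocks is more involved than the simple Cholesky ordering used here; treat this as a minimal sketch only.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
n = 160                                   # hypothetical quarterly sample
data = pd.DataFrame({
    "gov_spending": np.cumsum(rng.normal(0, 1, n)),
    "gdp":          np.cumsum(rng.normal(0, 1, n)),
    "interest":     rng.normal(0, 1, n),
}).diff().dropna()                        # work with growth rates

res = VAR(data).fit(maxlags=4, ic="aic")  # lag length chosen by AIC
irf = res.irf(periods=20)                 # orthogonalized (Cholesky) IRFs
# Response of GDP to a one-standard-deviation spending shock:
print(irf.orth_irfs[:, 1, 0])
```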
Focusing on signaling games, I illustrate the relevance of the rationalizability
approach for the analysis of multistage games with incomplete
information. I define a class of iterative solution procedures, featuring a
notion of forward induction: the Receiver tries to explain the Sender's
message in a way which is consistent with the Sender's strategic sophistication
and certain given restrictions on beliefs. The approach is applied to
some numerical examples and economic models. In a standard model with
verifiable messages a full disclosure result is obtained. In a model of job
market signaling the best separating equilibrium emerges as the unique
rationalizable outcome only when the high and low types are sufficiently
different. Otherwise, rationalizability only puts bounds on the education
choices of different types.
This paper suggests that the main (and possibly unique) source of β- and σ-convergence
in GDP per worker (i.e. labor productivity) across Italian regions over the
1980-2000 period is the change in technical and allocative efficiency, i.e. convergence
in relative TFP levels. To reach this conclusion, I construct an approximation of
the production frontier at different points in time using Data Envelopment Analysis
(DEA), and measure efficiency as the output-based distance from the frontier. This
method is entirely data-driven, and does not require the specification of any particular
functional form for technology. Changes in GDP per worker can be decomposed
into changes in relative efficiency, changes due to overall technological progress, and
changes due to capital deepening. My results suggest that: (i) differences in relative
TFP are quantitatively important; (ii) while technological progress and capital
deepening are the main, and equally important, forces behind the rightward shift
in the distribution of GDP per worker, convergence in relative TFP is the main
determinant of the change in the distribution's shape.
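The output-based distance can be computed by linear programming. The following is a hedged sketch of the generic constant-returns, output-oriented DEA program (not the paper's code): for each unit it finds the largest proportional output expansion supported by a convex combination of observed input-output pairs, and reports efficiency as its reciprocal.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """X: (n, m) inputs; Y: (n, s) outputs. Returns efficiency scores in (0, 1]."""
    n = X.shape[0]
    eff = np.empty(n)
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
        c = np.r_[-1.0, np.zeros(n)]
        # The reference combination may not use more inputs than unit o ...
        A_in = np.c_[np.zeros((X.shape[1], 1)), X.T]
        # ... and must produce at least phi times unit o's outputs.
        A_out = np.c_[Y[o][:, None], -Y.T]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[X[o], np.zeros(Y.shape[1])],
                      bounds=[(0, None)] * (n + 1))
        eff[o] = 1.0 / res.x[0]            # output distance -> efficiency
    return eff

# Toy data: one input, one output, three units; units 1 and 2 span the frontier.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [4.0], [1.5]])
print(dea_output_efficiency(X, Y))         # -> [1.0, 1.0, 0.5]
```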
We study the effects of model uncertainty in a simple New-Keynesian
model using robust control techniques. Due to the simple model structure, we
are able to find closed-form solutions for the robust control problem, analyzing
both instrument rules and targeting rules under different timing assumptions.
In all cases but one, an increased preference for robustness makes monetary
policy respond more aggressively to cost shocks but leaves the response to
demand shocks unchanged. As a consequence, inflation is less volatile and
output is more volatile than under the non-robust policy. Under one particular
timing assumption, however, increasing the preference for robustness has no
effect on the optimal targeting rule (nor on the economy).
The existing studies of unemployment benefit and unemployment duration suggest that reforms
that lower either the level or the duration of benefits should reduce unemployment. Despite the
large number of such reforms implemented in Europe in the past decades, this paper presents
evidence that shows no correlation between the reforms and the evolution of unemployment.
This paper also provides an explanation for this fact by exploring the
interactions between unemployment benefits and social assistance programmes. Unemployed
workers who are also eligible, or expect to become eligible, for some social assistance
programmes are less concerned about their benefits being reduced or terminated. They will not
search particularly intensively around the time of benefit exhaustion, nor will they become
less choosy about job offers by reducing their reservation wages. Data from the European
Community Household Panel (ECHP) are used to provide evidence to support this argument.
Results show that, in fact, for social assistance recipients the probability of finding a job is not
significantly higher during the last months of entitlement.
We document the presence of a trade-off between unemployment benefits (UB) and employment protection legislation (EPL) in the provision of insurance against labor market risk. Different countries' locations along this trade-off represent stable, hard to modify, politico-economic equilibria. We develop a model in which voters are required to cast a ballot over the strictness of EPL, the generosity of UBs and the amount of redistribution involved in the financing of unemployment insurance. Agents are heterogeneous along two dimensions: employment status - insiders and outsiders - and skills - low and high. Unlike previous work on EPL, we model employment protection as an institution redistributing among insiders, notably in favour of the low-skill workers. A key implication of the model is that configurations with strict EPL and low UB should emerge in the presence of compressed wage structures. Micro data on wage premia on educational attainments and on the strictness of EPL are in line with our results. We also find empirical support for the substantive assumptions of the model on the effects of EPL.
We study how public policy can contribute to increase the share of early stage and
high-tech venture capital investments, thus helping the development of active venture
capital markets. A simple extension of the seminal model by Holmstrom and Tirole
(1997) provides a theoretical base for our analysis. We then explore a unique panel of
data for 14 European countries between 1988 and 2001. We have several novel findings.
First, the opening of stock markets targeted at entrepreneurial companies positively
affects the shares of early stage and high-tech venture capital investments; reductions
in capital gains tax rates have a similar, albeit weaker, effect. Second, a reduction in
labor regulation results in a higher share of high-tech investments. Finally, we find no
evidence of a shortage of supply of venture capital funds in Europe, and no evidence
of an effect of increased public R&D spending on the share of high-tech or early stage
venture capital investments.
The aim of this paper is to propose a new method for forecasting Italian
inflation. We expand on a standard factor model framework (see Stock and
Watson (1998)) along several dimensions. To start with we pay special
attention to the modeling of the autoregressive component of inflation.
Second, we apply forecast combination (Granger (2000) and Pesaran and
Timmermann (2001)) and generate our forecast by averaging the predictions
of a large number of models. Third, we allow for time variation in parameters
by applying rolling regression techniques, with a window of three years of
monthly data. Backtesting shows that our strategy outperforms both the
benchmark model (i.e. a factor model which does not allow for model
uncertainty) and additional univariate (ARMA) and multivariate (VAR)
models. Our strategy also improves on the alternative models when
applied to turning-point prediction.
This paper integrates a theory of equilibrium unemployment into a monetary model
with nominal price rigidities. The model is used to study the dynamic response of the
economy to a monetary policy shock. The labor market displays search and matching
frictions and bargaining over real wages and hours of work. Search frictions generate unemployment in equilibrium. Wage bargaining introduces a microfounded real wage
rigidity. First, I study a Nash bargaining model. Then, I develop an alternative
bargaining model, which I refer to as right-to-manage bargaining. Both models have
similar predictions in terms of real wage dynamics: bargaining significantly reduces
the volatility of the real wage. But they have different implications for inflation
dynamics: under right-to-manage, the real wage rigidity also results in smaller
fluctuations of inflation. These findings are consistent with recent evidence
suggesting that real wages and inflation only vary by a moderate amount in
response to a monetary shock. Finally, the model can explain important features of
labor-market fluctuations. In particular, a monetary expansion leads to a rise in job
creation and to a hump-shaped decline in unemployment.
This paper explores the quantitative plausibility of three candidate explanations for the
European productivity slowdown with respect to the US. The empirical plausibility of the
common wisdom on the topic (the "IT usage" hypothesis) is found to crucially depend on
how IT-using industries are defined. If a narrow definition is chosen, the IT usage
hypothesis no longer explains the whole of the EU productivity slowdown but just about
55% of it, with the remaining part to be attributed to other factors than IT, as argued in the
IT irrelevance view. Instead, no room is left for IT-producing industries as another potential
vehicle for the US-EU productivity growth gap.
Financial intermediaries can choose the extent to which they want to be active
investors, providing valuable services like advice, support and corporate governance.
We examine the determinants of the decision to become an active financial
intermediary using a hand-collected dataset on European venture capital deals. We
find organizational specialization to be a key driver. Venture firms which are
independent and focused on venture capital alone get more involved with their
companies. The human capital of venture partners is another key driver of active
financial intermediation. Venture firms whose partners have prior business
experience or a scientific education provide more support and governance. These
results have implications for prevailing views of financial intermediation, which largely
abstract from issues of specialization and human capital.
This paper discusses the recent literature on the role of the state in economic development.
It concludes that government incentives to enact sound policies are key to economic success.
It also discusses the evidence on what happens after episodes of economic and political
liberalizations, asking whether political liberalizations strengthen government incentives to
enact sound economic policies. The answer is mixed. Most episodes of economic
liberalizations are indeed preceded by political liberalizations. But the countries that have
done better are those that have managed to open up the economy first, and only later have
liberalized their political system.
This paper studies empirically the effects of and the interactions amongst economic and
political liberalizations. Economic liberalizations are measured by a widely used indicator
that captures the scope of the market in the economy, and in particular of policies
towards freer international trade (cf. Sachs and Warner 1995, Wacziarg and Welch 2003).
Political liberalizations correspond to the event of becoming a democracy. Using a
difference-in-difference estimation, we ask what are the effects of liberalizations on
economic performance, on macroeconomic policy and on structural policies. The main
results concern the quantitative relevance of the feedback and interaction effects
between the two kinds of reforms. First, we find positive feedback effects between
economic and political reforms. The timing of events indicates that causality is more
likely to run from political to economic liberalizations, rather than vice versa, but we
cannot rule out feedback effects in both directions. Second, the sequence of reforms
matters. Countries that first liberalize and then become democracies do much better
than countries that pursue the opposite sequence, in almost all dimensions.
We develop a structural model of a small open economy with gradual exchange rate pass-through and endogenous inertia in inflation and output. We then estimate the model by matching the implied impulse responses with those obtained from a VAR model estimated on Swedish data. Although our model is highly stylized, it captures very well the responses of output, domestic and imported inflation, the interest rate, and the real exchange rate. However, in order to account for the observed persistence in the real exchange rate and
the large deviations from UIP, we need a large and volatile premium on foreign exchange.
Firing frictions and renegotiation costs affect worker and firm preferences
for rigid wages versus individualized Nash bargaining in a standard
model of equilibrium unemployment, in which workers vary by
observable skill. Rigid wages permit savings on renegotiation costs and
prevent workers from exploiting the firing friction. For standard calibrations,
the model can account for political support for wage rigidity
by both workers and firms, especially in labor markets for intermediate
skills. The firing friction is necessary for this effect, and reinforces
the impact of both turbulence and other labor market institutions on
preferences for rigid wages.
We analyse the evolution of the business cycle in the accession countries, after a careful examination of the seasonal properties of the available series and the required modification of the cycle dating procedures. We then focus on the degree of cyclical concordance within the group of accession countries, which turns out to be in general lower than that between the existing EU countries (the Baltic countries constitute an exception). With respect to the Eurozone, the indications of synchronization are also generally low, and lower than those observed for countries taking part in previous enlargements (with the exceptions of Poland, Slovenia and Hungary). In the light of the optimal currency area literature, these results cast doubt on the usefulness of adopting the euro in the near future for most accession countries, though other criteria such as the extent of trade and the gains in credibility may point in a different direction.
The accession of ten countries into the European Union makes the
forecasting of their key macroeconomic indicators such as GDP
growth, inflation and interest rates an exercise of some importance.
Because of the transition period, only short spans of reliable time series
are available which suggests the adoption of simple time series models
as forecasting tools, because of their parsimonious specification and
good performance. Nevertheless, despite this constraint on the span of
data, a large number of macroeconomic variables (for a given time
span) are available which are of potential use in forecasting, making the
class of dynamic factor models a reasonable alternative forecasting tool.
We compare the relative performance of the two forecasting approaches,
first by means of simulation experiments and then by using data for five
acceding countries. We also evaluate the role of Euro-area information for
forecasting, and the usefulness of robustifying techniques such as
intercept corrections and second differencing. We find that factor models
work well in general, even though there are marked differences across
countries. Robustifying techniques are useful in a few cases, while
Euro-area information is virtually irrelevant.
The hazard rate of investment is derived within a real option model, and its properties
are analyzed in order to directly study the relation between uncertainty and investment.
Maximum likelihood estimates of the hazard are calculated using a sample of MNEs that
have invested in Central and Eastern Europe over the period 1990-1998. Employing a
standard, non-parametric specification of the hazard, our measure of uncertainty has a
negative effect on investment, but the reduced-form model is unable to control for nonlinearities
in the relationship. The structural estimation of the option-based hazard is
instead able to account for the non-linearities and exhibits a significant value of waiting,
though the latter is independent of our measure of uncertainty. This finding supports
the existence of alternative channels through which uncertainty can affect investment.
Equilibrium business cycle models typically have fewer shocks than variables.
As pointed out by Altug (1989) and Sargent (1989), if variables are measured with
error, this characteristic implies that the model solution for measured variables has
a factor structure. This paper compares estimation performance for the impulse
response coefficients based on a VAR approximation to this class of models and
an estimation method that explicitly takes into account the restrictions implied
by the factor structure. Bias and mean squared error for both factor based and
VAR based estimates of impulse response functions are quantified using, as data
generating process, a calibrated standard equilibrium business cycle model. We
show that, at short horizons, VAR estimates of impulse response functions are less
accurate than factor estimates while the two methods perform similarly at medium
and long run horizons.
This paper aims to test some implications of the fiscal theory of
the price level (FTPL). We develop a model similar to Leeper (1991)
and Woodford (1996), but extended so as to generate real effects of fiscal
policy also in the "Ricardian" regime, via an OLG demographic
structure. We test the predictions of the FTPL, as incorporated in the model,
on the data. We find that US fiscal policy in the period
1960-1979 can be classified as "Non-Ricardian", while it has been "Ricardian"
since 1990. According to our analysis, the fiscal theory of the
price level characterizes one phase of the post-war US history.
We use a quantitative model of the U.S. economy to analyze the response
of long-term interest rates to monetary policy, and compare the model results
with empirical evidence. We find that the strong and time-varying yield curve
response to monetary policy innovations found in the data can be explained by
the model. A key ingredient in explaining the yield curve response is central
bank private information about the state of the economy or about its own
target for inflation.
In this paper a simple dynamic optimization problem is solved with the help of
the recursive saddle point method developed by Marcet and Marimon (1999). According
to Marcet and Marimon, their technique should yield a full characterization
of the set of solutions for this problem. We show, though, that while their method
allows us to calculate the true value of the optimization program, not all solutions
which it admits are correct. Indeed, some of the policies which it generates as
solutions to our problem are either suboptimal or do not even satisfy feasibility.
We identify the reasons underlying this failure and discuss its implications for the
numerous existing applications.
We analyze welfare maximizing monetary policy in a dynamic general equilibrium two-country
model with price stickiness and imperfect competition. In this context, a typical terms
of trade externality affects policy interaction between independent monetary authorities. Unlike
the existing literature, we remain consistent with a public finance approach by an explicit
consideration of all the distortions that are relevant to the Ramsey planner. This strategy entails
two main advantages. First, it allows an accurate characterization of optimal policy in an economy
that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe
a full range of alternative dynamic equilibria when price setters in both countries are completely
forward-looking and households' preferences are not restricted. We study optimal policy both in
the long-run and in response to shocks, and we compare commitment under Nash competition
and under cooperation. By deriving a second-order accurate solution to the policy functions,
we also characterize the welfare gains from international policy cooperation.
In this paper we concentrate on the hypothesis that the empirical
rejections of the Expectations Theory (ET) of the term structure of interest
rates can be caused by improper modelling of expectations. Our
starting point is an interesting anomaly found by Campbell and Shiller (1987)
when, by taking a VAR approach, they abandon the limited-information
approach to testing the ET, in which realized returns are taken as a proxy for
expected returns. We use financial factors and macroeconomic information
to construct a test of the theory based on simulating investors'
effort to use the model in 'real time' to forecast future monetary policy
rates. Our findings suggest that the importance of fluctuations of risk
premia in explaining the deviation from the ET is reduced when some
forecasting model for short-term rates is adopted and a proper evaluation
of the uncertainty associated with policy rate forecasts is considered.
Employment protection legislation (EPL) is not enforced uniformly across the board. There are a number of exemptions to the coverage
of these provisions: firms below a given threshold scale and workers with temporary contracts are not subject to the most restrictive provisions. This within-country variation in enforcement allows us to make inferences on the impact of EPL which go beyond the usual cross-country approach. In this paper we develop a simple model which explains why these exemptions are in place to start with. Then we empirically assess the effects of EPL on dismissal probabilities, based on a double-difference approach. Our results are in line with the predictions of the theoretical model. Workers in firms exempted from EPL are more likely to be laid off. We do not observe this effect in the case of temporary workers. There is no effect of the exemption threshold on the growth of firms.
We present a theoretical model of a parliamentary democracy, where
party structures, government coalitions and fiscal policies are endogenously
determined. The model predicts that, relative to proportional elections, majoritarian
elections reduce government spending because they reduce party
fragmentation and, therefore, the incidence of coalition governments. Party
fragmentation can persist under majoritarian rule if party supporters are
unevenly distributed across electoral districts. Economic and political data,
from up to 50 post-war parliamentary democracies, strongly support our
joint predictions from the electoral rule, to the party system, to the type of
government, and to government spending.
While there is consensus on the need to raise the time spent in the market by
European women, it is not clear how these goals should be achieved. Tax wedges,
assistance in the job search process, and part-time jobs are policy instruments that
are widely debated in policy circles. The paper presents a simple model of labour
supply with market frictions and heterogeneous home production where the effects of
these policies can be coherently analysed. We show that subsidies to labour market
entry increase women's entry into the labour market, but they also increase exits from
it, with an ambiguous effect on employment. Subsidies to part-time work do
increase employment, but they have ambiguous effects on hours and market production.
Finally, reductions in taxes on market activities that are highly substitutable with home
production have unambiguous positive effects on market employment and production.
We examine a model of contracting where parties interact repeatedly and can contract
at any point in time, but writing enforceable contracts is costly. A contract can
describe contingencies and actions at a more or less detailed level, and the cost of writing
a contract is proportional to the amount of detail. We consider both formal (externally
enforced) and informal (self-enforcing) contracts. The presence of writing costs has important
implications both for the optimal structure of formal contracts, particularly the
tradeoff between contingent and spot contracts, and for the interaction between formal
and informal contracting. Our model sheds light on these implications and generates a
rich set of predictions about the determinants of the optimal mode of contracting.
This paper presents a simple model of imperfect labor markets with endogenous labor market participation and home production. We show that a two-sector economy (home and market) implies a three-state labor market when labor market imperfections take the form of an irreversible entry cost incurred by workers. This simple framework yields several results. First, it delivers an expression for the employment rate and, as side-products, a measure of the unemployment rate and the size of the labour force. Second, it rationalizes several empirical works on the definition of unemployment in labor force surveys. Third, it derives endogenously all flows between the three labour market states. Fourth, a calibration of the model rationalizes differences in employment rates: in the US, we find a market productivity premium of +30% and market frictions of -15% compared to France. Finally, the model is a very simple reduced form of search models with which it is fully consistent: the irreversible entry cost is the opportunity cost of search and depends on aggregate conditions.
The existing literature ignores the fact that in most European countries the
strictness of Employment Protection Legislation (EPL) varies across the firm size
distribution. In Italy firms are obliged to rehire an unfairly dismissed worker only
if they employ more than 15 employees. Theoretically, the paper solves a
baseline model of EPL with threshold effects, and shows that firms close to the
threshold are characterized by an increase in inaction and by a reluctance to
grow. Empirically, the paper estimates transition probability matrices on firm
level employment using a longitudinal data set based on Italian Social Security
(INPS) records, and finds two results. First, firms close to the 15 employees
threshold experience an increase in persistence of 1.5 percent with respect to a
baseline statistical model. Second, firms with 15 employees are more likely to
move backward than upward. Finally, the paper tests the effect of a 1990 reform
which tightened the regulation on individual dismissal only for small firms. It
finds that the persistence of small firms relative to large firms increased
significantly. Overall, these threshold effects are significant and robust, but
quantitatively small.
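As a purely illustrative sketch (not the authors' code), the following Python fragment shows how a transition probability matrix of the kind described above can be estimated from a longitudinal employment panel; the toy data, column names and size classes around the 15-employee threshold are all hypothetical.

# Hypothetical sketch: transition probabilities between firm-size classes
# estimated from a toy firm-level panel (not the INPS data).
import pandas as pd

panel = pd.DataFrame({
    "firm": [1, 1, 2, 2, 3, 3],
    "year": [1989, 1990, 1989, 1990, 1989, 1990],
    "employees": [14, 15, 15, 14, 16, 18],
})

def size_class(n):
    # Classify firms relative to the 15-employee EPL threshold.
    if n < 15:
        return "below"
    if n == 15:
        return "at"
    return "above"

panel["state"] = panel["employees"].apply(size_class)
panel = panel.sort_values(["firm", "year"])
panel["next_state"] = panel.groupby("firm")["state"].shift(-1)

# Row-normalized counts of state pairs give the transition probabilities.
transitions = pd.crosstab(panel["state"], panel["next_state"], normalize="index")
print(transitions)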
We consider a society that has to elect an official who provides a public service
for the citizens. Potential candidates differ in their competence and every potential
candidate has private information about his opportunity cost to perform the task
of the elected official. We develop a new citizen candidate model with a unique
equilibrium to analyze citizens' candidature decisions.
Under some weak additional assumptions, bad candidates run with a higher
probability than good ones, and for unattractive positions, good candidates free-ride
on bad ones. We also analyze the comparative static effects of wage increases
and of the cost of running on potential candidates' entry decisions.
This paper examines competition in a liberalized market, with reference to some key features of the natural gas industry. Each firm has a low (zero) marginal cost core capacity, due to long-term contracts with take-or-pay obligations, and additional capacity at higher marginal costs. The market is decentralized: the firms decide which customers to serve and then compete in prices. We show that under both sequential and simultaneous entry, there is a strong incentive to segment the market: when take-or-pay obligations are still to be covered, entering and competing for the same customers implies low margins. If instead a firm is left as a monopolist on a fraction of the market, exhausting its obligation, it has no further incentive to enter a second market, where the rival will be a monopolist as well. Hence, we obtain entry without competition. Antitrust ceilings do not prevent such an outcome, while a wholesale pool market induces generalized competition and low margins in the retail segment.
belongs to the realm of politics. We evaluate how political constraints shape the social
security system in six countries - France, Germany, Italy, Spain, the UK and the US -
under population aging. Two main aspects of the aging process are relevant to the
analysis. First, the increase in the dependency ratio - the ratio of retirees to workers
- reduces the average profitability of the unfunded social security system, thereby
inducing the agents to reduce the size of the system by substituting their claims
towards future pensions with more private savings. Second, an aging electorate leads
to larger systems, since it increases the relevance of pension spending on the
policy-makers' agenda. The overall assessment from our simulations is that the political
aspect dominates in all countries, albeit with some differences. Spain, the fastest aging
country, faces the largest increase in the social security contribution rate. When labor
market considerations are introduced, the political effect still dominates, but it is less
sizeable. Country specific characteristics (not accounted for in our simulations), such as
the degree of redistribution in the pension system and the existence of family ties in
the society, may also matter. Our simulations deliver a strong policy implication: an
increase in the effective retirement age always decreases the size of the system chosen
by the voters, while often increasing its generosity. Finally, delegation of pension policy
to the EC may reduce political accountability and hence help to reform the systems.
rival firms operating in an uncertain environment. We test the implications of the model
through a discrete choice panel data sample of MNEs that have invested in Central and
Eastern Europe over the period 1990-1997. Interacting the measure of rivals' investment
in country-industry pairs with uncertainty, we find strong evidence of oligopolistic reaction,
especially through the channel of Bayesian learning postulated by the model. The
findings are robust with respect to different model specifications.
trade costs can generate sizable increases in trade volumes over time. A fall in trade
costs has two effects on the volume of trade. First, for given factor endowments, it
raises the degree of specialization of countries, leading to a larger volume of trade
in the short run. Second, it raises the factor price of each country's abundant
production factor, leading to diverging paths of relative factor endowments across
countries and a rising degree of specialization. A simulation exercise shows that
a fall in trade costs over time produces a non-linear increase in the trade share of
output as in the data. Even when elasticities of substitution are not particularly
high, moderate reductions in trade costs lead to large trade volumes over time.
menu of labor market outcomes. We document this neglected trade-off of
globalization for a sample of Indian manufacturing firms. On the one hand,
the employees of firms subject to foreign competition face a more uncertain
stream of earnings and riskier employment prospects. On the other, they enjoy
more rapid career advancement and/or have more opportunities to train and upgrade
their skills. The negative uncertainty costs and the positive incentive effects
of globalization are thus two sides of the same coin, and concentrating on just
one side gives a misleading picture of globalization.
employment protection legislation (EPL) in the provision of insurance against labour
market risk. The mix of quantity restrictions and price regulations adopted by the
various countries would seem to correspond to a stable politico-economic equilibrium.
We develop a model in which voters are required to cast a ballot over the strictness of
EPL and over the generosity of UB. Agents are heterogeneous along two dimensions:
employment status - there are insiders and outsiders - and skills - low and high skills.
We show that if there exists a majority of low-skill insiders, the voting game has a
politico-economic equilibrium with low UB and high EPL; otherwise, the equilibrium
features high UB and low EPL. Another testable implication of the model is that a
larger share of elderly workers increases the demand for EPL. Panel data on institutions
and on the age and educational structures of the populations are broadly in line with
our results. We also find that those favouring EPL over UB in a public opinion poll
carried out in Italy in 2001 have precisely the characteristics predicted by our model.
Policies are typically chosen by politicians and bureaucrats. This paper investigates the criteria that should lead a society to allocate policy tasks to elected policymakers (politicians) or non-elected bureaucrats. Politicians tend to be preferable for tasks with the following features: they do not involve too much specific technical ability relative to effort; there is ex ante uncertainty about the ex post preferences of the public, and flexibility is valuable; time inconsistency is not an issue; small but powerful vested interests do not have large stakes in the policy outcome; effective decisions over policies require taking into account policy complementarities and compensating the losers; the policies imply redistributive conflicts among large groups of voters. The reverse applies to the attribution of prerogatives to bureaucrats.
area countries, evaluate the degree of synchronization, and compare the results with the UK and the US. Fourth, we construct indices of business cycle diffusion, and assess how widespread cyclical movements are throughout the economy. Finally, we repeat the dating exercise using monthly industrial production data, to evaluate whether the higher sampling frequency can compensate for the higher variability of the series and produce a more accurate dating.
from a large data set for forecasting, namely, the use of an automated model selection
procedure, the adoption of a factor model, and single-indicator-based forecast pooling. The
comparison is conducted using a large set of indicators for forecasting US inflation and GDP
growth. We also compare our large set of leading indicators with purely autoregressive
models, using an evaluation procedure that is particularly relevant for policy making. The
evaluation is conducted both ex-post and in a pseudo real time context, for several forecast
horizons, and using both recursive and rolling estimation. The results indicate a preference for
simple forecasting tools, with a good relative performance of pure autoregressive models, and
substantial instability in the leading characteristics of the indicators.
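A minimal sketch, on simulated data, of the pseudo real time evaluation scheme described above: forecasts are produced using only information available at each date, under both recursive (expanding) and rolling estimation. The AR(1) benchmark, window length and data are assumptions made for illustration, not the paper's actual setup.

# Illustrative pseudo real-time evaluation of an AR(1) benchmark.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=200).cumsum() * 0.1 + rng.normal(size=200)  # toy series

def ar1_forecast(history):
    # One-step-ahead AR(1) forecast, slope estimated by least squares.
    x, y_next = history[:-1], history[1:]
    beta = np.dot(x, y_next) / np.dot(x, x)
    return beta * history[-1]

window = 60
rec_errors, roll_errors = [], []
for t in range(window, len(y) - 1):
    rec_errors.append(y[t + 1] - ar1_forecast(y[: t + 1]))                 # recursive
    roll_errors.append(y[t + 1] - ar1_forecast(y[t + 1 - window: t + 1]))  # rolling

print("recursive MSFE:", np.mean(np.square(rec_errors)))
print("rolling MSFE:  ", np.mean(np.square(roll_errors)))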
inflation and GDP growth. Our evaluation is based on using the variables in the ECB Euro-area
model database, plus a set of similar variables for the US. We compare the forecasting
performance of each indicator with that of purely autoregressive models, using an evaluation
procedure that is particularly relevant for policy making. The evaluation is conducted both
ex-post and in a pseudo real time context, for several forecast horizons, and using both recursive
and rolling estimation. We also analyze three different approaches to combining the
information from several indicators. First, we discuss using as indicators the estimated
factors from a dynamic factor model for all the indicators. Second, an automated model
selection procedure is applied to models with a large set of indicators. Third, we consider
pooling the single indicator forecasts. The results indicate that single indicator forecasts are on
average better than those derived from more complicated methods, but for them to beat the
autoregression a different indicator has to be used in each period. A simple real-time
procedure for indicator selection produces good results.
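The following toy sketch illustrates the flavour of such a real-time selection rule: at each date, the forecast comes from the single indicator whose track record up to that date is best. The simulated error process and evaluation window are hypothetical.

# Hypothetical real-time indicator selection on simulated forecast errors.
import numpy as np

rng = np.random.default_rng(1)
T, n_indicators = 120, 10
# errors[t, i]: forecast error of indicator i at date t (toy data).
errors = rng.normal(size=(T, n_indicators)) * rng.uniform(0.5, 2.0, n_indicators)

selected = []
for t in range(12, T):
    past_msfe = np.mean(errors[:t] ** 2, axis=0)  # track record up to t - 1
    best = np.argmin(past_msfe)                   # best indicator so far
    selected.append(errors[t, best])              # use it at date t

print("MSFE of the selection rule:", np.mean(np.square(selected)))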
rule-based empirical macro models for the analysis of monetary policy.
These models, based on the conventional view that inflation
stabilization should be a concern of monetary policy only, have typically neglected
the role of fiscal policy. We start with the evidence that a baseline
VAR-augmented Taylor rule can deliver recurrent mispredictions of
inflation in the U.S. before 1987. We then show that a fiscal feed-back rule, in
which the primary deficit reacts to both the output gap and the
government debt, can well characterize the behavior of fiscal policy throughout the
sample. However, by employing Markov-switching methods, we find
evidence of substantial instability across fiscal regimes. Yet this happens precisely
before 1987. We then augment the monetary VAR with a
fiscal policy rule and control for the endogenous regime switches in both
rules. We find that only over time windows belonging to the pre-1987 period
does the model based on the two rules predict the behavior of inflation
better than the one based on the monetary policy rule alone. After
1987, when fiscal policy is estimated to switch to a regime of fiscal discipline,
the monetary-fiscal mix can be appropriately described as a regime of
monetary dominance. Over this period a model based on a monetary policy rule
is always a better predictor of inflation behavior than one
comprising both a monetary and a fiscal rule.
of the optimal monetary policy design problem as well as of simple feedback
rules. The international relative price channel is emphasized as the one peculiar
to the open-economy dimension of monetary policy. Hence, flexibility in
the nominal exchange rate enhances this channel. We first show that a feature
of the optimal policy under commitment, unlike the one under discretion,
is to entail a stationary nominal exchange rate and price level. We show that
this property also characterizes a regime of fixed exchange rates. Hence, in
evaluating the desirability of such a regime, this benefit needs to be weighed
against the cost of excess smoothness in the terms of trade. We show that
there exist combinations of the parameter values that make a regime of fixed
exchange rates more desirable than the discretionary optimal policy. When the
economy is sufficiently open, this happens for a high relative weight assigned to
output gap variability in the Central Bank's loss function and for high values of
the elasticity of substitution between domestic and foreign goods. From
this we draw interesting conclusions for a modern version of the optimal currency area
literature.
welfare-state spending - display systematic patterns in the vicinity of
elections? And do such electoral cycles differ among political systems?
We investigate these questions in a data set encompassing sixty democracies
from 1960 to 1998. Without conditioning on the political system, we find
that taxes are cut before elections, painful fiscal adjustments are postponed
until after the elections, while welfare-state spending displays no
electoral cycle. Our subsequent results show that pre-election tax cuts
are a universal phenomenon. The post-election fiscal adjustments (spending
cuts, tax hikes and rises in surplus) are, however, only present in
presidential democracies. Moreover, majoritarian electoral rules alone are
associated with pre-electoral spending cuts, while proportional electoral
rules are associated with expansions of welfare spending both before and
after elections.
on the distribution of production factors in the world and parameter values, allows for
worldwide factor price equalization or complete specialization. We explore the dynamics
of the model under different parameter values, and relate our theoretical results to the
empirical literature that studies the determinants of countries' income per capita growth
and levels. In general, the model is capable of generating predictions in accordance with
the most important findings in the empirical growth literature. At the same time, it
avoids some of the most serious problems of the (autarkic) neoclassical growth model.
We propose a novel methodology to deal with model uncertainty based on thick modelling, i.e. on considering a multiplicity of predictive models rather than a single predictive model. We show that portfolio allocations based on a thick modelling strategy systematically outperform those based on thin modelling.
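A stylized sketch of the thick modelling idea, under an assumed toy allocation rule that is not the authors': rather than acting on the single best predictive model, one averages the portfolio weights implied by many models.

# Thick vs thin modelling with simulated return forecasts (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
n_models, n_assets = 25, 4
forecasts = rng.normal(0.05, 0.02, size=(n_models, n_assets))  # expected returns

def weights_from_forecast(mu):
    # Toy rule: long-only weights proportional to positive forecast returns.
    w = np.clip(mu, 0, None)
    return w / w.sum() if w.sum() > 0 else np.full(len(mu), 1.0 / len(mu))

thin = weights_from_forecast(forecasts[0])                              # one model
thick = np.mean([weights_from_forecast(m) for m in forecasts], axis=0)  # average

print("thin-model weights :", np.round(thin, 3))
print("thick-model weights:", np.round(thick, 3))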
In a Common Currency Area (CCA) the Common Central Bank sets a uniform rate of inflation across countries, taking into account the area's economic conditions. If countries in recession favor a more expansionary policy than countries in expansion, a conflict of interest between members arises when national business cycles are not fully synchronized. If governments of member countries have an informational advantage over the state of their domestic economy, such a conflict may create an adverse selection problem: national authorities overemphasize their shocks in order to shape the common policy towards their needs. This creates an inefficiency over and above the one-policy-fits-all cost discussed in the optimal currency area literature. In order to minimize this extra burden of asymmetric information, monetary policy must over-react to large symmetric shocks and under-react to small asymmetric ones. The result is sub-optimal volatility of inflation.
After the creation of the European Monetary Union (EMU), both the European Commission (EC) and the European Central Bank (ECB) are focusing more and more on the evolution of the EMU as a whole, rather than on single member countries. A particularly relevant issue from a policy point of view is the availability of reliable forecasts for the key macroeconomic variables. Hence, both the fiscal and the monetary authorities have developed aggregate forecasting models, along the lines previously adopted for the analysis of single countries. A similar approach will likely be followed in empirical analyses of, e.g., the existence of an aggregate Taylor rule or the evaluation of the aggregate impact of monetary policy shocks, where linear specifications are usually adopted. Yet, it is uncertain whether standard linear models provide the proper statistical framework to address these issues. The process of aggregation across countries can produce smoother series, better suited to analysis with linear models, by averaging out country-specific shocks. But the method of construction of the aggregate series, which often involves time-varying weights, and the presence of common shocks across countries, such as the deflation in the early 1980s and the convergence process in the early 1990s, can introduce substantial non-linearity into the generating process of the aggregate series. To evaluate whether this is the case, we fit a variety of non-linear and time-varying models to aggregate EMU macroeconomic variables, and compare them with linear specifications. Since non-linear models often over-fit in sample, we assess their performance in a real-time forecasting framework. It turns out that for several variables linear models are beaten by non-linear specifications, a result that questions the use of standard linear methods for forecasting and modeling EMU variables.
Despite the fast catching-up in ICT diffusion experienced by most EU countries in the last few years, information technologies have so far delivered few productivity gains in Europe. In the second half of the past decade, the growth contributions from ICT capital rose in six EU countries only (the UK, Denmark, Finland, Sweden, Ireland and Greece). Quite unlike the United States, this has not generally been associated with higher labour or total factor productivity growth rates, the only exceptions being Ireland and Greece. Particularly worrisome, the large countries of Continental Europe (Germany, France, Italy and Spain) showed stagnating or mildly declining growth contributions from ICT capital, together with definite declines in TFP growth compared to the first half of the 1990s. It looks as if the celebrated Solow paradox on the lack of correlation between ICT investment and productivity growth has fled the US and migrated to Europe.
It is rather common to have several competing forecasts for the same variable, and many methods have been suggested to pick the best on the basis of past forecasting performance. As an alternative, the forecasts can be combined to obtain a pooled forecast, and several options are available to select which forecasts should be pooled and how to determine their relative weights. In this paper we compare the relative performance of alternative pooling methods, using a very large dataset of about 500 macroeconomic variables for the countries in the European Monetary Union. In this case the forecasting exercise is further complicated by the short time span available, due to the need to collect a homogeneous dataset. For each variable in the dataset, we consider 58 forecasts produced by a range of linear, time-varying and non-linear models, plus 16 pooled forecasts. Our results indicate that on average combination methods work well. Yet, a more disaggregate analysis reveals that single non-linear models can outperform combination forecasts for several series, even though they perform rather badly for other series, so that on average their performance is not as good as that of pooled forecasts. Similar results are obtained for a subset of unstable series, for which the pooled forecasts behave only slightly better, and for three key macroeconomic variables, namely, industrial production, unemployment and inflation.
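As one concrete illustration of forecast pooling (a common scheme chosen for exposition, not necessarily one of the paper's 16 variants), the sketch below weights each model's forecast by the inverse of its past mean squared error; all inputs are simulated.

# Inverse-MSE forecast pooling on simulated data (illustrative only).
import numpy as np

def inverse_mse_pool(forecasts, past_errors):
    # forecasts: (n_models,) current forecasts;
    # past_errors: (T, n_models) historical forecast errors.
    mse = np.mean(past_errors ** 2, axis=0)
    weights = (1.0 / mse) / np.sum(1.0 / mse)
    return float(np.dot(weights, forecasts))

rng = np.random.default_rng(3)
past_errors = rng.normal(size=(50, 5)) * np.array([0.5, 1.0, 1.5, 2.0, 2.5])
forecasts = np.array([1.2, 1.0, 0.8, 1.5, 0.9])
print("pooled forecast:", inverse_mse_pool(forecasts, past_errors))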
In this paper we evaluate the relative performance of linear, non-linear and time-varying models for about 500 macroeconomic variables for the countries in the Euro area, using a real-time forecasting methodology. It turns out that linear models work well for about 35% of the series under analysis, time-varying models for another 35%, and non-linear models for the remaining 30% of the series. The gains in forecasting accuracy from the choice of the best model can be substantial, in particular for longer forecast horizons. These results emerge from a detailed disaggregated analysis, while they are hidden when an average loss function is used. To explore in more detail the issue of parameter instability, we then apply a battery of tests, detecting non-constancy in about 20-30% of the time series. For these variables the forecasting performance of the time-varying and non-linear models further improves, with larger gains for a larger fraction of the series. Finally, we evaluate whether non-linear models perform better for three key macroeconomic variables: industrial production, inflation and unemployment. It turns out that this is often the case. Hence, overall, our results indicate that there is a substantial amount of instability and non-linearity in the EMU, and suggest that it can be worth going beyond linear models for several EMU macroeconomic variables.