Working papers
IGIER fellows and affiliates publish books and articles in academic journals. Their current research projects are featured in the Working Paper series.
We study panel data regression models when the shocks of interest are aggregate and possibly small relative to idiosyncratic noise. This speaks to a large empirical literature that targets impulse responses via panel local projections. We show how to interpret the estimated coefficients when units have heterogeneous responses and how to obtain valid standard errors and confidence intervals. A simple recipe leads to robust inference: including lags as controls and then clustering at the time level. This strategy is valid under general error dynamics and uniformly over the degree of signal-to-noise of macro shocks.
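In notation we adopt here purely for illustration (not the paper's own), the recipe amounts to estimating, for each horizon h, a panel local projection of the form

y_{i,t+h} = \alpha_{i,h} + \beta_h x_t + \sum_{\ell=1}^{L} \gamma_{h,\ell}\, y_{i,t-\ell} + u_{i,t+h},

where x_t is the aggregate shock of interest, lagged outcomes enter as controls, and standard errors for \beta_h are clustered by time period t.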
Most societies in the world contain strong group identities and the culture supporting these groups is highly persistent. This persistence in turn gives rise to a practical problem: how do and should societies with strong group identities organize themselves for exchange and public good provision? In this paper, we develop a theoretical framework that allows us to study, normatively and positively, the relationship between social structure, state capacity, and economic activity.
Are the players “commonly meta-certain” of an interactive belief model itself? The paper formalizes what is meant by “a player is (meta-)certain of her own belief-generating map” or “the players are (meta-)certain of the profile of belief-generating maps (i.e., the model).” The paper shows: a player is (meta-)certain of her own belief-generating map if and only if her beliefs are introspective. The players are commonly (meta-)certain of the model if and only if, for any event which some player i believes at some state, it is common belief at the state that player i believes the event. The paper then asks whether the “common meta-certainty” assumption is needed for epistemic characterizations of game-theoretic solution concepts. The paper shows: common belief in rationality leads to actions that survive iterated elimination of strictly dominated actions, as long as each player is logical and (meta-)certain only of her own strategy and belief-generating map.
Algorithms are becoming the standard tool for bidding in auctions through which digital advertising is sold. To explore how algorithmic bidding might affect the functioning of these auctions, this study undertakes a series of simulated experiments where bidders employ Artificial Intelligence algorithms (Q-learning and Neural Network) to bid in online advertising auctions. We consider both the generalized second-price (GSP) auction and the Vickrey-Clarke-Groves (VCG) auction. We find that the more detailed the information available to the algorithms, the better the efficiency of the allocations and the advertisers' profits. Conversely, the auctioneer's revenues tend to decline as more complete information is available to the advertisers' bidding algorithms. We also compare the outcomes of algorithmic bidding to those of equilibrium behavior in a range of different specifications and find that algorithmic bidding has a tendency to sustain low bids under both the GSP and the VCG relative to competitive benchmarks. Moreover, the auctioneer's revenues under the VCG setting are either close to or lower than those under the GSP setting. In addition, we consider three extensions commonly observed in the data: the introduction of a non-strategic player, bidding through a common intermediary, and asymmetry of information across bidders. Consistent with the theory, the presence of the non-strategic player leads to increased efficiency, whereas bidding through a common intermediary leads to lower auctioneer revenue compared to the case of individual bidding. Moreover, in experiments with information asymmetry, more informed players earn higher rewards.
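For readers unfamiliar with the learning algorithms involved, a minimal Q-learning bid-update sketch is given below (a generic illustration, not the authors' implementation; the bid grid, the single-state setup, and the parameter values are our assumptions):

import random

# Illustrative Q-learning bidder over a discrete bid grid (hypothetical setup).
BIDS = [round(0.1 * k, 1) for k in range(11)]   # candidate bids from 0.0 to 1.0
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.05         # learning rate, discount factor, exploration rate
Q = {b: 0.0 for b in BIDS}                      # single-state Q-table: estimated value of each bid

def choose_bid():
    # epsilon-greedy: occasionally explore, otherwise play the currently best bid
    return random.choice(BIDS) if random.random() < EPSILON else max(Q, key=Q.get)

def update(bid, reward):
    # standard Q-learning update; with a single state, the continuation value
    # is simply the best current Q-value. The reward (e.g., value minus payment
    # when the impression is won) is supplied by the auction environment.
    Q[bid] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[bid])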
We evaluate how traditional parties may respond to populist parties on issues aligning with populist messages. During the 2020 Italian referendum on the reduction of members of Parliament, we conducted a large-scale field experiment, exposing 200 municipalities to nearly a million impressions of programmatic advertisement. Our treatments comprised two video ads against the reform: one debunking populist rhetoric and another attributing blame to populist politicians. This anti-populist campaign proved effective through demobilization, as it reduced both turnout and the votes in favor of the reform. Notably, the effects were more pronounced in municipalities with lower rates of college graduates, higher unemployment, and a history of populist votes. This exogenous influence introduced a unique populist dynamic, observable in the 2022 national election where treated municipalities showed increased support for Brothers of Italy, a rising populist party, and decreased support for both traditional parties and the populists behind the 2020 reform. A follow-up survey further showed increased political interest and diminished trust in political institutions among the residents of municipalities targeted by the campaign.
In a recent paper, Lin & Palfrey (2024) developed a theory of cognitive hierarchies (CH) in sequential games and observed that this solution concept is not reduced-normal-form invariant. In this note I qualify and explain this observation. I show that the CH model is normal-form invariant, and that the differences arising from the application of the CH model to the reduced normal form depend only on how randomization by level-0 types is modeled. Indeed, while the uniform behavior strategy in the extensive form yields the uniform mixed strategy in the normal form, the latter does not correspond to the uniform randomization in the reduced normal form, because different reduced strategies may correspond to sets of equivalent strategies with different cardinalities. I also comment on (i) the invariance of the CH model to some transformations of the sequential game, and (ii) the independence of conditional beliefs about co-players' level-types.
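A minimal illustration of the cardinality point (our example, not the paper's): let a player choose Out or In and, after In, choose L or R. The normal form has four strategies, (Out, L), (Out, R), (In, L), (In, R), so the uniform mixed strategy assigns P(Out) = 2/4 = 1/2, matching the uniform behavior strategy; the reduced normal form has only three strategies, Out, (In, L), (In, R), so uniform randomization over it assigns P(Out) = 1/3.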
We study whether a better knowledge of the functioning of pay-as-you-go pension systems and recent demographic trends affects natives’ attitudes towards immigration. In two online experiments conducted in Italy and Spain, we randomly treated participants with a video explaining how, in pay-as-you-go systems, the payment of current pensions depends on the contributions paid by current workers. The video also informs participants about population aging trends in their countries. The treatment increases knowledge of pay-as-you-go systems and future demographic trends for all participants. However, it improves attitudes towards migrants only for treated participants who do not support populist and anti-immigrant parties.
We document the spiral of populism in Europe and the direct and indirect role of economic insecurity shocks. Using survey data on individual voting, we make two contributions to the literature: (1) economic insecurity shocks have a significant impact on the populist vote share, directly as demand for protection, and indirectly through the induced changes in trust and attitudes; (2) a key consequence of increased economic insecurity is a drop in turnout. The impact of this largely neglected turnout effect is substantial: when economic insecurity increases, almost 40% of the induced change in the vote for a populist party, conditional on voting, comes from the turnout channel.
This paper empirically shows that the imbalance between an ethnic group’s political and military power is crucial to understanding the likelihood that such a group engages in a conflict. We develop a novel measure of a group’s military power by combining machine learning techniques with rich data on ethnic group characteristics and outcomes of civil conflicts in Africa and the Middle East. We couple this measure with available indicators of an ethnic group’s political power as well as with a novel proxy based on information about the ethnicity of cabinet members. We find that groups characterized by a higher mismatch between military and political power are between 30% and 50% more likely to engage in a conflict against their government, depending on the specification used. We also find that the effects of power mismatch are nonlinear, in agreement with the predictions of a simple model that accounts for the cost of conflict. Moreover, our results suggest that high-mismatch groups are typically involved in larger and centrist conflicts. The policy implication is that power-sharing recommendations and institutional design policies for peace should consider primarily the reduction of power mismatches between relevant groups, rather than focusing exclusively on equalizing political power in isolation.
We analyze the infinite repetition with imperfect feedback of a simultaneous or sequential game, assuming that players are strategically sophisticated---but impatient---expected-utility maximizers. Sophisticated strategic reasoning in the repeated game is combined with belief updating to provide a foundation for a refinement of self-confirming equilibrium. In particular, we model strategic sophistication as rationality and common strong belief in rationality. Then, we combine belief updating and sophisticated reasoning to provide sufficient conditions for a kind of learning---that is, the ability, in the limit, to exactly forecast the sequence of future observations---thus showing that impatient agents end up playing a sequence of self-confirming equilibria in strongly rationalizable conjectures of the one-period game.
How do people form beliefs about novel risks, with which they have little or no experience? Motivated by survey data we collected in 2020, which showed that beliefs about Covid’s lethality depended on a range of personal experiences in unrelated domains, we build a model based on the psychology of selective memory. When a person thinks about an event, different experiences compete for retrieval, and retrieved experiences are used to simulate the event based on how similar they are to it. The model yields predictions on how experiences interfere with each other in recall and how non domain-specific experiences bias beliefs based on their similarity to the assessed event. We test these predictions using data from our Covid survey and from a primed-recall experiment about cyberattack risk. Experiences and their measured similarity to the cued event successfully help explain beliefs, with patterns consistent with our theory. Our approach offers a new, structured way to study and jointly account for systematic biases and substantial belief heterogeneity.
We construct an index of long-term expected earnings growth for S&P 500 firms and show that it has remarkable power to jointly predict future errors in these expectations and stock returns, in both the aggregate market and the cross section. The evidence supports a mechanism whereby good news causes investors to become too optimistic about long-term earnings growth, for the market as a whole but especially for a subset of firms. This leads to inflated stock prices and, as beliefs are systematically disappointed, to subsequent low returns in the aggregate market and for the subset of firms. Overreaction of long-term expectations helps resolve asset pricing puzzles without time-series or cross-sectional variation in required returns.
We document two new facts about the distributions of answers in famous statistical problems: they are i) multi-modal and ii) unstable with respect to irrelevant changes in the problem. We offer a model in which, when solving a problem, people represent each hypothesis by attending “bottom up” to its salient features while neglecting other, potentially more relevant, ones. Only the statistics associated with salient features are used, others are neglected. The model unifies Gambler’s Fallacy, its variation by sample size, under- and overreaction in inference, and insensitivity to multiple signals, all as a byproduct of selective attention. The model also makes new predictions on how controlled changes in the salience of specific features should jointly shape measured attention and biases. We test and confirm these predictions experimentally, including by measuring attention and documenting novel biases predicted by the model. Bottom-up attention to features emerges as a unifying framework for biases conventionally explained using a variety of stable heuristics or distortions of the Bayes rule.
This handbook chapter studies how natural resource wealth can in many contexts fuel armed conflict. Starting from a simple theoretical model, we stress the role of geography and power mismatch in the so-called "natural resource curse". Drawing on recent empirical evidence, we highlight the importance of resource abundance, asymmetry, and capital-intensiveness, alongside local grievances and international interventions. We propose a series of evidence-driven policy conclusions, ranging from a "smart green transition" and democratic institution building, through labor-market interventions, to a series of specific policies requiring international coordination.
This paper discusses the historical and social origins of the bifurcation in the political institutions of China and Western Europe. An important factor, recognized in the literature, is that China centralized state institutions very early on, while Europe remained politically fragmented for much longer. These initial differences, however, were amplified by the different social organizations (clans in China, corporate structures in Europe) that spread in these two societies at the turn of the first millennium AD. State institutions interacted with these organizations, and were shaped and influenced by this interaction. The paper discusses the many ways in which corporations contributed to the emergence of representative institutions and gave prominence to the rule of law in the early stages of state formation in Europe, and how specific features of lineage organizations contributed to the consolidation of the Imperial regime in China.
This paper explores the tradeoff between competition and financial inclusion arising from the vertical integration between mobile network and mobile money operators. Combining novel data on mobile money fees, collected through the Wayback Machine, with sources on network coverage and financials, we examine the staggered adoption of platform interoperability across African operators and countries – a policy that promotes transactions and competition across mobile money operators. Our findings show that interoperability lowers mobile money fees and reduces network coverage and mobile towers, especially in rural and poor districts. Interoperability also results in a decline in various survey metrics of financial inclusion.
We compute new estimates for Total Factor Productivity (TFP) growth in five European countries and in the United States. Departing from standard methods, we account for positive profits and use firm surveys to proxy for unobserved changes in factor utilization. These novelties have a major impact in Europe, where our estimated TFP growth series are less volatile and less cyclical than the ones obtained with standard methods. Based on our approach, we provide annual industry-level and aggregate TFP series, as well as the first estimates of utilization-adjusted quarterly TFP growth in Europe.
JEL Codes: E01, E30, O30, O40
We study the stabilizing role of benefit extensions. We develop a tractable quantitative model with heterogeneous agents, search frictions, and nominal rigidities. The model allows for a stabilizing aggregate demand channel and a destabilizing labor market channel. We characterize each channel analytically and find that aggregate demand effects quantitatively prevail in the US. When we feed in estimated shocks, the model tracks unemployment in the two most recent downturns. We find that extensions lowered unemployment by a maximum of 0.35 pp in the Great Recession, while the joint stabilizing effect of extensions and benefit compensation peaked at 1.08 pp in the pandemic.
We offer a theory of changing dimensions of political polarization based on endogenous social identity. We formalize voter identity and stereotyped beliefs as in Bonomi et al. (2021), but add parties that compete on policy and also spread or conceal group stereotypes to persuade voters. Parties are historically connected to different social groups, whose members are more receptive to the ingroup party messages. An endogenous switch from class to cultural identity accounts for three major observed changes: i) growing conflict over cultural issues between voters and between parties, ii) dampening of political conflict over redistribution, despite rising inequality, and iii) a realignment of lower class voters from the left to the right. The incentive of parties to spread stereotypes is a key driver of identity-based polarization. Using survey data and congressional speeches we show that - consistent with our model - there is evidence of i) and ii) also in the voting realignment induced by the “China Shock” (Autor et al. 2020).
I show that offering monetary rewards to whistleblowers can backfire as a moral aversion to being paid for harming others can reverse the effect of financial incentives. I run a field experiment with employees of the Afghan Ministry of Education, who are asked to confidentially report on their colleagues’ attendance. I use a two-by-two design, randomizing whether or not reporting absence carries a monetary incentive as well as the perceived consequentiality of the reports. In the consequential treatment arm, where employees are given examples of the penalties that might be imposed on absentees, 15% of participants choose to denounce their peers when reports are not incentivized. In this consequential group, rewards backfire: only 10% of employees report when denunciations are incentivized. In the non-consequential group, where participants are guaranteed that their reports will not be forwarded to the government, only 6% of employees denounce absence without rewards. However, when moral concerns of harming others are limited through the guarantee of non-consequentiality, rewards do not backfire: the incentivized reporting rate is 12%.
Debt moratoria that allow borrowers to postpone loan payments are a frequently used tool intended to soften the impact of economic crises. We conduct a nationwide experiment with a large consumer lender in India to study how debt forbearance offers affect loan repayment and banking relationships. In the experiment, borrowers receive forbearance offers that are presented either as an initiative of their lender or the result of government regulation. We find that delinquent borrowers who are offered a debt moratorium by their lender are 4 percentage points (7 percent) less likely to default on their loan, while forbearance has no effect on repayment if it is granted by the regulator. Borrowers who are offered forbearance by their lender also have higher demand for future interactions with the lender: in a follow-up experiment conducted several months after the main intervention, demand for a non-credit product offered by the lender is 10 percentage points (27 percent) higher among customers who were offered repayment flexibility by the lender than among customers who received a moratorium offer presented as an initiative of the regulator. Overall, our results suggest that, rather than generating moral hazard, debt forbearance can improve loan repayment and support the creation of longer-term banking relationships not only for liquidity but also for relational contracting reasons. This provides a rationale for offering repayment flexibility even in settings where lenders are not required to provide forbearance.
Real-world contests are inherently uncertain since the player who exerts the highest effort can still lose. In this paper, I consider a general asymmetric incomplete information contest model with a nonparametric distribution of uncertainty in the contest success function. It generalizes all-pay auctions, Tullock contests, and rank-order tournaments with two asymmetric players. Uncertainty in the contest success function summarizes other factors that influence the contest win outcome apart from the efforts of the players, such as players’ reputation or luck. First, I nonparametrically identify and estimate the distribution of uncertainty using the information on contest win outcomes and efforts. Next, I nonparametrically identify and estimate the distributions of the players’ costs of exerting effort. The model provides a method to disentangle two sources of a player’s advantage: asymmetry in the cost distributions and the effect of the uncertainty distribution on the winning probability. As an empirical example, I apply the model to the U.S. House of Representatives elections.
We study mean-variance approximations for a large class of preferences. Compared to the standard mean-variance approximation, which only features a risk variability term, a novel index of variability appears. Its neglect in empirical estimation may result in puzzlingly inflated risk terms of standard mean-variance approximations.
We consider a model of a limit order book and determine the optimal tick size set by a social planner who maximizes the welfare of market participants. In a 2-period model where only two agents arrive sequentially, the tick size is a friction that constrains investors to use discrete price grids, and as a consequence the optimal tick size is equal to zero. However, in a model with sequential arrival of more than two investors who can endogenously either take liquidity or supply liquidity by undercutting or queuing behind existing orders, the tick size is positive: it is a strategic tool a social planner uses to optimally affect the choice made by investors between liquidity demand and supply. In addition, the optimal tick size is a function both of the value of the asset and of trading volume. The policy implication of such findings is that the European tick size regime and the “Intelligent Ticks” Nasdaq proposal dominate Reg. NMS Rule 612 that formalizes the tick size regime for the U.S. markets. Using data from the U.S. and the European markets we test our model’s empirical predictions.
Why, in the face of scandals and misbehavior, do partisan supporters hardly change their minds about their favored candidates? We study individuals’ online engagement with negative news on candidates in the 2016 US Presidential Election. Compared to independents, partisan users avoid commenting on bad news about their favorite candidate, but seek it out on the opponent, a political “ostrich effect”. When they do comment on bad news about their candidate, they try to rationalize it, display a more negative sentiment, and are more likely to cite scandals of the opponent. This behavior is consistent with the predictions of a model of online interactions where paying attention to non-consonant news is emotionally or psychologically costly, while paying attention to consonant news is pleasing. Because users enjoy receiving positive feedback on their views, intrinsic biases that drive ideological segregation are amplified on social media.
We explore how business groups use internal labor markets (ILMs) in response to changing economic conditions. We show that following the exit of a large industry competitor, group-affiliated firms expand and gain market share by increasing their reliance on the ILM to ensure swift hiring, especially of technical managers and skilled blue-collar workers. The ability to take advantage of this shock to growth opportunities is greater in firms with closer access to their affiliates’ human capital, as geographical proximity facilitates employee relocations across units. Overall, our findings point to the ILM as a prominent mechanism making affiliation with a business group valuable at times of change. For the ILM to perform its role in the face of industry shocks, group sectoral diversification must be combined with geographical proximity between affiliates.
We study the implications of employment targets on firm dynamics during the privatization of the East German economy. Exploiting novel contract-level data, we document three stylized facts. First, the policy distorted firm size choices and generated bunching of firms around their committed employment target. Second, exploiting heterogeneous labor preferences of privatizers, we show that assigning tight commitments to firms causes an increase in employment growth and leads to higher productivity growth. Finally, tighter commitments also result in significant costs by leading to increased firm exit. We interpret these results through the lens of a dynamic model with endogenous productivity growth at the firm level. The model highlights that while tight commitments distort the employment decision statically and lead to a higher exit probability, they also induce a “catch-up” increase in productivity growth. This is because although firm profits are lower under tight commitments, marginal profits with respect to productivity are higher. We calibrate the model to our data and find that the policy led to 3 percentage points higher aggregate TFP growth thanks to the productivity improvements of firms with tight contracts.
We investigate the impact of prices on ratings using Airbnb data. We theoretically illustrate two opposing channels: higher prices reduce the value for money, worsening ratings, but they increase the taste-based valuation of the average traveler, improving ratings. Results from panel regressions and a regression discontinuity design suggest a dominant value-for-money effect. In line with our model, hosts strategically complement lower prices with higher effort more when ratings are relatively low. Finally, we provide evidence that, upon entry, strategic hosts exploit the dominant value-for-money effect. The median entry discount of seven percent improves medium-run monthly revenues by three percent.
We propose that the mathematical representation of situations of strategic interaction, i.e., of games, should separate the description of the rules of the game from the description of players’ personal traits. Yet, we note that the standard extensive-form partitional representation of information in sequential games does not comply with this separation principle. We offer an alternative representation that extends to all (finite) sequential games the approach adopted in the theory of repeated games with imperfect monitoring; that is, we describe the flow of information accruing to players rather than the stock of information retained by players, as encoded in information partitions. Mnemonic abilities can be represented independently of games. Assuming that players have perfect memory, our flow representation gives rise to information partitions satisfying perfect recall. Different combinations of rules about information flows and of players’ mnemonic abilities may give rise to the same information partition. All extensive-form representations with information partitions, including those featuring absentmindedness, can be generated by some such combinations.
Macroeconomic outcomes depend on the distribution of markups across firms and over time, making firm-level markup estimates key for macroeconomic analysis. Methods to obtain these estimates require data on the prices that firms charge. Firm-level data with wide coverage, however, primarily comes from financial statements, which lack information on prices. We use an analytical framework to show that trends in markups or the dispersion of markups across firms can still be well-measured with such data. Finding the average level of the markup does require pricing data, and we propose a consistent estimator for such settings. We validate the analytical results with simulations of a quantitative macroeconomic model and firm-level administrative production and pricing data. Our analysis supports the use of financial data to measure trends in aggregate markups.
We provide two characterizations, one axiomatic and the other neuro-computational, of the dependence of choice probabilities on deadlines, within the widely used softmax representation (see the formula below), where p_t(a; A) is the probability that alternative a is selected from the set A of feasible alternatives when t is the time available to decide, λ(t) is a time-dependent noise parameter measuring the unit cost of information, u is a time-independent utility function, and α is an alternative-specific bias that determines the initial choice probabilities and possibly reflects prior information. Our axiomatic analysis provides a behavioral foundation of softmax (also known as the Multinomial Logit Model when α is constant). Our neuro-computational derivation provides a biologically inspired algorithm that may explain the emergence of softmax in choice behavior. Jointly, the two approaches provide a thorough understanding of soft-maximization in terms of internal causes (neurophysiological mechanisms) and external effects (testable implications).
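A standard way of writing the softmax formula referred to above (a reconstruction consistent with the description in the abstract; the symbols \lambda(t) for the noise parameter and \alpha for the bias follow common usage and are not taken from the paper):

p_t(a; A) = \frac{e^{u(a)/\lambda(t)}\,\alpha(a)}{\sum_{b \in A} e^{u(b)/\lambda(t)}\,\alpha(b)}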
Journal of Economic Literature Classification Numbers: C70, D83
We evaluate linear stochastic discount factor models using an ex-post portfolio metric: the realized out-of-sample Sharpe ratio of mean-variance portfolios backed by alternative linear factor models. Using a sample of monthly US portfolio returns spanning the period 1968-2016, we find evidence that multifactor linear models have better empirical properties than the CAPM, not only when the cross-section of expected returns is evaluated in-sample, but also when they are used to inform one-month-ahead portfolio selection. When we compare portfolios associated with multifactor models with mean-variance decisions implied by the single-factor CAPM, we document statistically significant differences in Sharpe ratios of up to 10 percent. Linear multifactor models that provide the best in-sample fit also yield the highest realized Sharpe ratios.
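In notation we use here for illustration (an assumption about the standard construction, not the paper's own description): given a factor model's estimates of mean excess returns \hat{\mu}_t and covariance \hat{\Sigma}_t based on data up to month t, the mean-variance weights and the realized out-of-sample Sharpe ratio are

w_t \propto \hat{\Sigma}_t^{-1}\hat{\mu}_t, \qquad r^p_{t+1} = w_t' r_{t+1}, \qquad \widehat{SR} = \bar{r}^p / \mathrm{sd}(r^p),

computed over the sequence of one-month-ahead portfolio returns.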
The Italian civil war and the Nazi occupation of Italy occurred at a critical juncture, just before the birth of a new democracy. We study the impact of these traumatic events by exploiting geographic heterogeneity in the duration and intensity of civil war, and the persistence of the battlefront along the "Gothic line" cutting through Northern-Central Italy. We find that the Communist Party gained votes in postwar elections where the Nazi occupation lasted longer, mainly at the expense of centrist parties. This effect persists until the late 1980s and appears to be driven by equally persistent changes in political attitudes.
We provide both an axiomatic and a neuropsychological characterization of the dependence of choice probabilities on time in the softmax (or Multinomial Logit Process, MLP) form. MLP is the most widely used model of preference discovery in all fields of decision making, from Quantal Response Equilibrium to Discrete Choice Analysis, from Psychophysics and Neuroscience to Combinatorial Optimization. Our axiomatic characterization of softmax makes it possible to empirically test its descriptive validity and to better understand its conceptual underpinnings as a theory of agents' rationality. Our neuropsychological foundation provides a computational model that may explain the emergence of softmax in human behavior and that naturally extends the classical Diffusion Model paradigm of binary choice to multialternative choice. These complementary approaches provide a complete perspective on soft-maximization as a model of preference discovery, both in terms of internal (neuropsychological) causes and external (behavioral) effects.
What explains the formation and decay of clusters of creativity? We match data on notable individuals born in Europe between the 11th and the 19th century with historical city data. The production and attraction of creative talent is associated with city institutions that protected economic and political freedoms and promoted local autonomy. Instead, indicators of local economic conditions, such as city size and real wages, do not predict creative clusters. We also show that famous creatives are spatially concentrated and clustered across disciplines, that their spatial mobility has remained stable over the centuries, and that creative clusters are persistent, but less so than population.
uential findings that, we argue, face serious identification problems. Thus, while banks with low capital can be an important source of aggregate inefficiency in the long run, their contribution to the severity of the great recession via capital misallocation was modest.
We study monotone, continuous, and quasiconcave functionals defined over an M-space. We show that if g is also Clarke-Rockafellar differentiable at a point x, then the closure of the Greenberg-Pierskalla differentials at x coincides with the closed cone generated by the Clarke-Rockafellar differentials at x. Under the same assumptions, we show that the set of normalized Greenberg-Pierskalla differentials at x coincides with the closure of the set of normalized Clarke-Rockafellar differentials at x. As a corollary, we obtain a differential characterization of quasiconcavity à la Arrow and Enthoven (1961) for Clarke-Rockafellar differentiable functions.
Does welfare improve when firms are better informed about the state of the economy and can better coordinate their decisions? We address this question in an elementary business-cycle model that highlights how the dispersion of information can be the source of both nominal and real rigidity. Within this context we develop a taxonomy for how the social value of information depends on the two rigidities, on the sources of the business cycle, and on the conduct of monetary policy.
a Hahn-Banach Theorem for modules of this kind;
a topology on the f-algebra that has the special feature of coinciding with the norm topology when the algebra is a Banach algebra and with the strong order topology of Filipovic, Kupper, and Vogelpoth (2009), when the algebra of all random variables on a probability space is considered.
As a leading example, we study in some detail the duality of conditional Lp-spaces.
Maccheroni, Marinacci, and Rustichini [17], in an Anscombe-Aumann framework, axiomatically characterize preferences that are represented by the variational utility functional V reported below, where u is a utility function on outcomes and c is an index of uncertainty aversion. In this paper, for a given variational preference, we study the class of functions c that represent V. Inter alia, we show that this set is fully characterized by a minimal and a maximal element, c* and d*. The function c*, also identified by Maccheroni, Marinacci, and Rustichini [17], fully characterizes the decision maker's attitude toward uncertainty, while the novel function d* characterizes the uncertainty perceived by the decision maker.
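The functional referred to above takes the standard variational form (a reconstruction based on the abstract's description; the displayed formula itself is not reproduced in the text):

V(f) = \min_{p \in \Delta} \left( \int u(f)\, dp + c(p) \right),

where the minimum is taken over the set \Delta of probability measures on the state space.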
the proposal to legalize paying these bribes while increasing fines on accepting them. We explore the scheme's performance as regards corruption deterrence and public service provision. Costs of verifying reports make the scheme more effective against larger bribes and where institutions' quality is higher. A modified scheme, where immunity is conditional on reporting, addresses some key objections. The mechanism works better against more distortionary forms of corruption than harassment bribes, provided monetary rewards can compensate bribers for losing the object of the corrupt exchange. Results highlight strong complementarities with policies aimed at improving the independence and accountability of law enforcers.
patterns of political representation and the identity of elected legislators? This paper uses an important electoral reform passed in 1912 in Italy to provide evidence on these questions. The reform trebled the electorate (from slightly less than three million to 8,650,000) leaving electoral rules and district boundaries unchanged. By exploiting differences in enfranchisement rates across electoral districts, we identify the effect of franchise extension on various political outcomes. Enfranchisement increased the vote share of left-wing social reformers but had no impact on their parliamentary representation, no impact on the parliamentary representation of the aristocracy and traditional elites, and no effect on political competition. We show that left-wing parties decreased their vote shares and were systematically defeated in key swing districts. We document the elites' efforts to minimize the political impact of the reform and, in particular, we show that the Vatican's secret involvement in the post-reform electoral campaign had a substantial impact on voting results, although formerly and newly enfranchised voters were equally affected. We relate our results to economic theories of democratization, which appear to be only partially compatible with our evidence.
exam performance affects their future exam performance. Our identification strategy exploits a natural experiment in a leading UK university where different departments have historically had different rules on the provision of feedback to their students. We find that the provision of feedback has a positive effect on students' subsequent test scores: the mean impact corresponds to 13% of a standard deviation in test scores. The impact of feedback is stronger for more able students and for students who have less information to start with about the academic environment, while no subset of individuals is found to be discouraged by feedback. Our findings suggest that students have imperfect information on how their effort translates into test scores and that the provision of feedback might be a cost-effective means to increase students' exam performance.
Experimental evidence suggests that agents in social dilemmas have belief-dependent, other-regarding preferences. But in experimental games such preferences cannot be common knowledge, because subjects play with anonymous co-players. We address this issue theoretically and experimentally in the context of a trust game, assuming that the trustee's choice may be affected by a combination of guilt aversion and intention-based reciprocity. We recover trustees' belief-dependent preferences from their answers to a structured questionnaire. In the main treatment, the answers are disclosed and made common knowledge within each matched pair, while in the control treatment there is no disclosure. Our main auxiliary assumption is that such disclosure approximately implements a psychological game with complete information. To organize the data, we classify subjects according to their elicited preferences, and test predictions for the two treatments using both rationalizability and equilibrium. We find that guilt aversion is the prevalent psychological motivation, and that behavior and elicited beliefs move in the direction predicted by the theory.
This is particularly true for the investment cost friction and habit persistence: when low frequencies are present in the estimation, the investment cost friction and habit persistence are estimated to be higher than when low frequencies are absent.
We study a Mean-Risk model derived from a behavioral theory of Disappointment with multiple reference points. One distinguishing feature of the risk measure is that it is based on mutual deviations of outcomes, not deviations from a specific target. We prove necessary and sufficient conditions for strict first and second order stochastic dominance, and show that the model is, in addition, a Convex Risk Measure. The model allows for richer, and behaviorally more plausible, risk preference patterns than competing models with equal degrees of freedom, including Expected Utility (EU), Mean-Variance (MV), Mean-Gini (MG), and models based on non-additive probability weighting, such as Dual Theory (DT). For example, in asset allocation, the decision-maker can abstain from diversifying in a risky asset unless it meets a threshold performance, and gradually invest beyond this threshold, which appears more acceptable than the extreme solutions provided by either EU and MV (always diversify) or DT and MG (always plunge). In asset trading, the model allows no-trade intervals, like DT and MG, in some, but not all, situations. An illustrative application to portfolio selection is presented. The model can provide an improved criterion for Mean-Risk analysis by injecting a new level of behavioral realism and flexibility, while maintaining key normative properties.
However, this result hides strong heterogeneous effects: highly educated non-mothers are persuaded by the informational treatments to increase their intended use of formal child care (and to pay more for it), whereas low-educated non-mothers are persuaded to reduce their intended labor supply. These findings are consistent with women responding to monetary incentives and/or having different preferences for maternal care. These heterogeneous responses across women send a warning signal about the true effectiveness - in terms of take-up rates - of often advocated public policies regarding formal child care.
Subject classifications: Utility/preference: Estimation. Decision analysis: Risk.
Area of review: Decision Analysis.
on an individual's preference structure. We test the approach via an experiment in a riskless context in which subjects are asked to evaluate mobile phone packages that differ on three attributes.
responsible for. In our experiment an agent chooses between a lottery and a safe asset; payment from the chosen option goes to a principal who then decides how much to allocate between the agent and a third party. We observe widespread blame: regardless of their choice, agents are blamed by principals for the outcome of the lottery, an event they are not responsible for. We provide an explanation of this apparently irrational behavior with a delegated-expertise principal-agent model, the subjects' salient perturbation of the environment.
uctuations are explained by shocks unrelated to technology.
education on religiosity and women's empowerment. A new law implemented in 1998 resulted in individuals born after a specific date being more likely to complete at least 8 years of schooling, while those born earlier could drop out after 5 years. This allows the implementation of a Regression Discontinuity (RD) Design and the estimation of meaningful causal estimates of schooling. Using the 2008 Turkish Demographic Health Survey, we show that the reform resulted in a one-year increase in years of schooling among women on average. Over a period of ten years, this education increase resulted in women reporting lower levels of religiosity, greater decision rights over marriage and higher household consumption (of durables). We find that these effects work through different channels, depending on women's family background. For women whose mothers had no formal education, the reform resulted in them only finishing the compulsory schooling and having higher labor force participation. For women whose mothers had some formal education, the reform had persistent effects beyond compulsory schooling, and these women were subsequently married to more educated (and possibly wealthier) husbands but remained outside the labor force. We interpret these findings as evidence that education may empower women across a wide spectrum of a Muslim society, yet, depending on pre-reform constraints to participation, its effects may not be strong enough to fully overcome participation constraints (in education or the labor force).
uctuations make domestic assets a good hedge against labor income risk. Evidence from developed economies in recent years is qualitatively and quantitatively consistent with the mechanisms highlighted by the theory.
JEL codes: G10, G18, D81.
All robust equilibria of plurality voting games satisfy Duverger's Law: In any robust equilibrium, exactly two candidates receive a positive number of votes. Moreover, robustness (only) rules out a victory of the Condorcet loser.
All robust equilibria under the runoff rule satisfy Duverger's Hypothesis: First-round votes are (almost always) dispersed over more than two alternatives. Robustness has strong implications for equilibrium outcomes under the runoff rule: For large parts of the parameter space, the robust equilibrium outcome is unique.
'Crises feed uncertainty. And uncertainty affects behaviour, which feeds the crisis.'
Olivier Blanchard, The Economist, January 29, 2009
2000 Mathematics Subject Classification: Primary 28A12, 28A25, 46G12; Secondary 91B06
Data on people's subjective expectations of returns as well as on their schooling decisions allow me to directly estimate and compare cost distributions of poor and rich individuals. I find that poor individuals require significantly higher expected returns to be induced to attend college, implying that they face higher costs than individuals with wealthy parents. I then test predictions of a model of college attendance choice in the presence of credit constraints, using parental income and wealth as a proxy for the household's (unobserved) interest rate. I find that poor individuals with high expected returns are particularly responsive to changes in direct costs, which is consistent with credit constraints playing an important role. Evaluating potential welfare implications by applying the Local Instrumental Variables approach of Heckman and Vytlacil (2005) to my model, I find that a sizeable fraction of poor individuals would change their decision in response to a reduction in direct costs. Individuals at the margin have expected returns that are as high or higher than the individuals already attending college, suggesting that government policies such as fellowship programs could lead to large welfare gains.
Furthermore, we show how the extremal transfers can be put to use in mechanism design problems where Revenue Equivalence does not hold. To this end we first explore the role of extremal transfers when the agents with type-dependent outside options are free to participate in the mechanism. Finally, we consider the question of budget-balanced implementation. We show that an allocation rule can be implemented in an incentive compatible, individually rational and ex post budget balanced mechanism if and only if there exists an individually rational extremal transfer scheme that delivers an ex ante budget surplus.
facilitated the introduction of structural reforms, defined as deregulation in the product markets and liberalization and deregulation in the labor markets. After reviewing the theoretical arguments that may link the adoption of the Euro and structural reforms, we investigate the empirical evidence. We find that the adoption of the Euro has been associated with an acceleration of the pace of structural reforms in the product market. The adoption of the Euro does not seem to have accelerated labor market reforms in the "primary labor market;" however, the run-up to the Euro adoption seems to have been accompanied by wage moderation. We also investigate issues concerning the sequencing of goods and labor market reforms.
two alternative theories - children as consumption vs. investment good. We use as a natural experiment the Italian pension reforms of the 90s that introduced a clear discontinuity in the treatment across workers. This policy experiment is particularly well suited, since the consumption motive predicts that lower future pensions reduce fertility, while the old-age security motive predicts that they increase it. Our empirical analysis identifies a clear and robust positive effect of less generous future pensions on post-reform fertility. These findings are consistent with the old-age security motive even for contemporary fertility.
ex-ante optimality requires intergenerational risk sharing. We compare the level of time-consistent intergenerational risk sharing chosen by a social planner and by office-seeking politicians. In the political setting, the transfer of resources across generations - a PAYG pension system - is determined as a Markov equilibrium of a probabilistic voting game. Negative shocks represented by low realized returns on the risky asset induce politicians to compensate the old through a PAYG system. Unless the young are crucial to win the election, this political system generates more intergenerational risk sharing than the (time-consistent) social optimum. In particular, these transfers are more persistent and less responsive to the realization of the shock than optimal. This is because politicians anticipate their current transfers to the elderly to be compensated through offsetting transfers by future politicians, and thus have an incentive to overspend. Perhaps surprisingly, aging increases the socially optimal transfer but makes politicians less likely to overspend, by making it more costly for future politicians to compensate the current young.
We develop and estimate a medium-scale macroeconomic model that allows for unemployment and staggered nominal wage contracting. In contrast to most existing quantitative models, employment adjustment is on the extensive margin and the employment of existing workers is efficient. Wage rigidity, however, affects the hiring of new workers. The former is introduced via the staggered Nash bargaining setup of Gertler and Trigari (2006). A robust finding is that the model with wage rigidity provides a better description of the data than does a flexible wage version. Overall, the model fits the data roughly as well as existing quantitative macroeconomic models, such as Smets and Wouters (2007) or Christiano, Eichenbaum and Evans (2005). More work is necessary, however, to ensure a robust identification of the key labor market parameters.
and productivity growth in the Italian manufacturing industries in 1995-2003. Our results indicate that the off-shoring of intermediates within the same industry (narrow off-shoring) is beneficial for productivity growth, while the off-shoring of services is not. We also find that the way in which off-shoring is measured may matter considerably. The positive relation between off-shoring of intermediates and productivity growth holds with our direct measures based on input-output data but disappears when either a broad measure or the Feenstra-Hanson off-shoring measure employed in other studies is used instead.
their preferences concerning an irreversible social decision. Voters can either implement the project in the first period, or they can postpone the decision to the second period. We analyze the effects of different majority rules. Individual first period voting behavior may become "less conservative" under supermajority rules, and it is even possible that a project is implemented in the first period under a supermajority rule that would not be implemented under simple majority rule. We characterize the optimal majority rule, which is a supermajority rule. In contrast to individual investment problems, society may be better off if the option to postpone the decision did not exist. These results are qualitatively robust to natural generalizations of our model.
If successful, the innovative effort makes it possible to take new actions that may be ex-post welfare enhancing (legal) or decreasing (illegal). Deterrence in this setting works by affecting the incentives to invest in innovation (average deterrence). Type-I errors, through over-enforcement, discourage innovative effort, while type-II errors (under-enforcement) spur it. The ex-ante expected welfare effect of innovations shapes the optimal policy design. When innovations are ex-ante welfare improving, laissez-faire is chosen. When innovations are instead welfare decreasing, law enforcement should limit them through average deterrence. We consider several policy environments differing in the instruments available. Enforcement effort is always positive and fines are (weakly) increasing in the social loss of innovations. In some cases accuracy is not implemented, contrary to the traditional model where it always enhances (marginal) deterrence, while in others it is improved selectively only on type-II errors (asymmetric protocols of investigation).
cointegration and dynamic factor models. It introduces the Factor-augmented Error Correction Model (FECM), where the factors estimated from a large set of variables in levels are jointly modelled with a few key economic variables of interest. With respect to the standard ECM, the FECM protects, at least in part, from omitted variable bias and the dependence of cointegration analysis on the specific limited set of variables under analysis. It may also be in some cases a refinement of the standard Dynamic Factor Model (DFM), since it allows us to include the error correction terms in the equations and, by allowing for cointegration, prevents the errors from being non-invertible moving average processes. In addition, the FECM is a natural generalization of factor-augmented VARs (FAVAR) considered by Bernanke, Boivin and Eliasz (2005) inter alia, which are specified in first differences and are therefore misspecified in the presence of cointegration. The FECM has a vast range of applicability. A set of Monte Carlo experiments and two detailed empirical examples highlight its merits in finite samples relative to standard ECM and FAVAR models. The analysis is conducted primarily within an in-sample framework, although the out-of-sample implications are also explored.
diffusion index-based methods in short samples with structural change. We consider several data generation processes, to mimic different types of structural change, and compare the relative forecasting performance of factor models and more traditional time series methods. We find that changes in the loading structure of the factors into the variables of interest are extremely important in determining the performance of factor models. We complement the analysis with an empirical evaluation of forecasts for the key macroeconomic variables of the Euro area and Slovenia, for which relatively short samples are officially available and structural changes are likely. The results are coherent with the findings of the simulation exercise, and confirm the relatively good performance of factor-based forecasts also in short samples with structural change.
models that can handle unbalanced datasets. Due to the different release lags of business cycle indicators, data unbalancedness often emerges at the end of multivariate samples, which is sometimes referred to as the 'ragged edge' of the data. Using a large monthly dataset of the German economy, we compare the performance of different factor models in the presence of the ragged edge: static and dynamic principal components based on realigned data, the Expectation-Maximisation (EM) algorithm and the Kalman smoother in a state-space model context. The monthly factors are used to estimate current quarter GDP, called the 'nowcast', using different versions of what we call factor-based mixed-data sampling (Factor-MIDAS) approaches. We compare all possible combinations of factor estimation methods and Factor-MIDAS projections with respect to nowcast performance. Additionally, we compare the performance of the nowcast factor models with the performance of quarterly factor models based on time-aggregated and thus balanced data, which neglect the most timely observations of business cycle indicators at the end of the sample. Our empirical findings show that the factor estimation methods don't differ much with respect to nowcasting accuracy. Concerning the projections, the most parsimonious MIDAS projection performs best overall. Finally, quarterly models are in general outperformed by the nowcast factor models that can exploit ragged-edge data.
work of the standard neoclassical growth model. The short-run revenue loss after an income tax cut is partly -- or, depending on parameter values, even completely -- offset by growth in the long run, due to the resulting incentives to further accumulate capital. We study how the dynamic response of government revenue to a tax cut changes if we allow a Ramsey economy to engage in international trade: the open economy's ability to reallocate resources between labor-intensive and capital-intensive industries reduces the negative effect of factor accumulation on factor returns, thus encouraging the economy to accumulate more than it would do under autarky. We explore the quantitative implications of this intuition for the US in terms of two issues recently treated in the literature: dynamic scoring and the Laffer curve. Our results demonstrate that international trade enhances the response of government revenue to tax cuts by a relevant amount. In our benchmark calibration, a reduction in the capital-income tax rate has virtually no effect on government revenue in steady state.
variable most directly related to current and expected monetary policy,
the yield on long-term government bonds. We find that the level of long-term
rates in Europe is almost entirely explained by U.S. shocks and by
the systematic response of U.S. and European variables (inflation, short
term rates and the output gap) to these shocks. Our results suggest in
particular that U.S. variables are more important than local variables
in the policy rule followed by European monetary authorities: this was
true for the Bundesbank before EMU and has remained true for the
ECB, at least so far. Using closed-economy models to analyze monetary
policy in the euro area is thus inconsistent with the empirical evidence on the
determinants of Euro area long-term rates. It is also inconsistent with
the way the Governing Council of the ECB appears to make actual policy
decisions.
the functioning of current institutions? This paper argues that individual
values and convictions about the scope of application of norms
of good conduct provide the "missing link". Evidence from a variety
of sources points to two main findings. First, individual values consistent
with generalized (as opposed to limited) morality are widespread
in societies that were ruled by non-despotic political institutions in
the distant past. Second, well-functioning institutions are often observed
in countries or regions where individual values are consistent
with generalized morality, and under different identifying assumptions
this suggests a causal effect from values to institutional outcomes. The
paper ends with a discussion of the implications for future research.
the evolution of models estimated to evaluate the macroeconomic impact of
monetary policy. We argue that the main challenge for the
econometrics of monetary policy is the combination of theoretical models and
information from the data to construct empirical models. The failure of the
large econometric models at the beginning of the 1970s might be explained by
their inability to take proper account of both these aspects. The great
critiques by Lucas and Sims have generated an alternative approach which, at
least initially, has been almost entirely dominated by theory. The LSE
approach has instead concentrated on the properties of the statistical models
and on the best way of incorporating information from the data into the
empirical models, paying little attention to the economic foundation of the
adopted specification. The realization that the solution of a DSGE model can
be approximated by a restricted VAR, which is also a statistical model, has
generated a potential link between the two approaches. The open question is
which type of VARs are most appropriate for the econometric analysis of
monetary policy.
This paper studies a theoretical model where individuals respond
to incentives but are also influenced by norms of good conduct inherited
from earlier generations. Parents rationally choose what values to
transmit to their offspring, and this choice is influenced by the quality
of external enforcement and the pattern of likely future transactions.
The equilibrium displays strategic complementarities between values
and current behavior, which reinforce the effects of changes in the
external environment. Values evolve gradually over time, and if the
quality of external enforcement is chosen under majority rule, there is
hysteresis: adverse initial conditions may lead to a unique equilibrium
path where external enforcement remains weak and individual values
discourage cooperation.
This paper reconsiders the developments of model evaluation in macroeconometrics over the last forty years. Our analysis starts from the failure of early empirical macroeconomic models caused by stagflation in the seventies. The different diagnoses of this failure are then analyzed and classified into two groups: explanations related to problems in the theoretical models that lead to problems in the identification of the relevant econometric model, and explanations related to problems in the underlying statistical model that lead to misspecification of the relevant econometric model. Developments in macroeconometric model evaluation after the failure of the Cowles foundation models are then discussed to illustrate how the different critiques have initiated different approaches in macroeconometrics. The evolution of what has been considered the consensus approach to macroeconometric model evaluation over the last thirty years is then followed. The criticism leveled at Cowles foundation models in the early seventies might apply almost exactly to DSGE-VAR model evaluation in the first decade of
the new millennium. However, the combination of a general statistical model, such as a Factor-Augmented VAR, with a DSGE model seems to produce forecasts that perform better than those based exclusively on either the theoretical or the statistical model.
of Uganda began to publish newspaper ads on the timing and amount of funds
disbursed to the districts. The intent of the campaign was to boost schools' and
parents' ability to monitor the local officials in charge of disbursing funds to the
schools. The mass information campaign was successful. But since newspaper
penetration varies greatly across districts, the exposure to information about the
program, and thus funding, differ across districts. I use this variation in program
exposure between districts to evaluate whether public funds have an effect on
student performance. The results show that money matters: On average, stu-
dents in districts highly exposed to the information campaign, and hence to the
grant program, scored 0.40 standard deviations better in the Primary Leaving
Exam (PLE) than students in districts less exposed to information. The results
are robust to controlling for a broad range of confounding factors.
sidered attractive by the profession not only from the theoretical perspec-
tive but also from an empirical standpoint. As a consequence of this
development, methods for diagnosing the fit of these models are being
proposed and implemented. In this article we illustrate how the concept
of statistical identification, which was introduced and used by Spanos (1990)
to criticize traditional evaluation methods of Cowles Commission models,
could be relevant for DSGE models. We conclude that the recently pro-
posed model evaluation method, based on the DSGE-VAR(λ), might not satisfy
the condition for statistical identification. However, our appli-
cation also shows that the adoption of a FAVAR as a statistically identified
benchmark leaves unaltered the support of the data for the DSGE model
and that a DSGE-FAVAR can be an optimal forecasting model.
bonds in the Euro area. There is a common trend in yield differentials, which
is correlated with a measure of aggregate risk. In contrast, liquidity differentials
display sizeable heterogeneity and no common factor. We propose a simple model
with endogenous liquidity demand, where a bond's liquidity premium depends both
on its transaction cost and on investment opportunities. The model predicts that
yield differentials should increase in both liquidity and risk, with an interaction
term of the opposite sign. Testing these predictions on daily data, we find that
the aggregate risk factor is consistently priced, liquidity differentials are priced for
a subset of countries, and their interaction with the risk factor is in line with the
model's prediction and crucial to detect their effect.
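A minimal sketch (not the authors' estimation) of how the predicted signs can be checked in a regression of yield differentials on liquidity, the risk factor, and their interaction; the data and variable names below are hypothetical, and HAC standard errors stand in for the paper's inference.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical daily data for one country's yield differential vs the benchmark
rng = np.random.default_rng(1)
T = 500
liquidity = np.abs(rng.standard_normal(T))      # liquidity (transaction cost) differential
risk = np.abs(rng.standard_normal(T))           # aggregate risk factor
spread = 0.3 * liquidity + 0.5 * risk - 0.2 * liquidity * risk + 0.1 * rng.standard_normal(T)

X = sm.add_constant(np.column_stack([liquidity, risk, liquidity * risk]))
res = sm.OLS(spread, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(res.params)  # positive loadings on liquidity and risk, negative interaction expected
```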
We estimate the effect of political regime transitions on growth with semi-parametric methods, combining difference-in-differences with
matching, which have not been used in macroeconomic settings. Our semi-parametric estimates suggest that previous parametric estimates
may have seriously underestimated the growth effects of democracy. In particular, we find an average negative effect on growth of leav-
ing democracy on the order of -2 percentage points implying effects on income per capita as large as 45 percent over the 1960-2000 panel.
Heterogenous characteristics of reforming and non-reforming countries appear to play an important role in driving these results.
nology is biased in favor of a country's abundant production factors. We provide an expla-
nation for this finding based on the Heckscher-Ohlin model. Countries trade and specialize
in the industries that use intensively the production factors they are abundantly endowed
with. For given factor endowment ratios, this implies smaller international differences in
factor price ratios than under autarky. Thus, when measuring the factor bias of technol-
ogy with the same aggregate production function for all countries, they appear to have
an abundant-factor bias in their technologies.
factor techniques, to produce composite coincident indices (CCIs) at the sectoral
level for the European countries and for Europe as a whole. Few CCIs are available
for Europe compared to the US, and most of them use macroeconomic variables and
focus on aggregate activity. However, there are often delays in the release of macroeconomic
data, later revisions, and differences in the definition of the variables across
countries, while surveys are available in a timely manner, not subject to revision, and fully comparable
across countries. Moreover, there are substantial discrepancies in activity at
the sectoral level, which justifies the interest in a sectoral disaggregation. Compared
to the Confidence Indicators produced by the European Commission, which are based
on a simple average of the aggregate survey answers, we show that factor based CCIs,
using survey answers at a more disaggregate level, produce higher correlation with the
reference series for the majority of sectors and countries.
but also for the decisions of private agents, consumers and firms. Since it is difficult
to identify a single variable that provides a good measure of current economic
conditions, it can be preferable to consider a combination of several coincident indicators,
i.e., a composite coincident index (CCI). In this paper, we review the main
statistical techniques for the construction of CCIs, propose a new pooling-based
method, and apply the alternative techniques for constructing CCIs for the largest
European countries in the euro area and for the euro area as a whole. We find that
different statistical techniques yield comparable CCIs, so that it is possible to reach
a consensus on the status of the economy.
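A minimal sketch of one standard technique in this family, a factor-based CCI taken as the first principal component of standardized coincident indicators; the indicator panel below is simulated, and the paper's pooling-based method is not reproduced here.

```python
import numpy as np

# Simulated monthly panel of coincident indicators (T months x N series)
rng = np.random.default_rng(2)
T, N = 240, 6
cycle = np.cumsum(rng.standard_normal(T))                      # latent business cycle
panel = np.outer(cycle, rng.uniform(0.5, 1.5, N)) + rng.standard_normal((T, N))

# Standardize each indicator and take the first principal component as the CCI
Z = (panel - panel.mean(0)) / panel.std(0)
_, _, vt = np.linalg.svd(Z, full_matrices=False)
cci = Z @ vt[0]                                                # factor-based composite coincident index
print(abs(np.corrcoef(cci, cycle)[0, 1]))                      # close to 1 by construction
```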
We provide a unified state-space modelling framework that encom-
passes different existing discrete-time yield curve models. Within such a
framework we analyze the impact on forecasting performance of two
crucial modelling choices, i.e. the imposition of no-arbitrage restric-
tions and the size of the information set used to extract factors. Using
US yield curve data, we find that: a. macro factors are very useful in
forecasting at medium/long forecasting horizon; b. financial factors
are useful in short run forecasting; c. no-arbitrage models are effec-
tive in shrinking the dimensionality of the parameter space and, when
supplemented with additional macro information, are very effective in
forecasting; d. within no-arbitrage models, assuming time-varying risk
price is more favorable than assuming constant risk price for medium
horizon-maturity forecast when yield factors dominate the informa-
tion set, and for short horizon and long maturity forecast when macro
factors dominate the information set; e. however, given the complex-
ity and the highly non-linear parameterization of no-arbitrage models,
it is very difficult to exploit within this type of models the additional
information offered by large macroeconomic datasets.
a common weakness: taxes, government spending and interest rates
are assumed to respond to various macroeconomic variables but not
to the level of the public debt; moreover the impact of fiscal shocks
on the dynamics of the debt-to-GDP ratio are not tracked. We ana-
lyze the effects of fiscal shocks allowing for a direct response of taxes,
government spending and the cost of debt service to the level of the
public debt. We show that omitting such a feedback can result in
incorrect estimates of the dynamic effects of fiscal shocks. In par-
ticular the absence of an effect of fiscal shocks on long-term interest
rates - a frequent finding in research based on VARs that omit a debt
feedback - can be explained by their misspecification, especially over
samples in which the debt dynamics appears to be unstable. Using
data for the U.S. economy and the identification assumption proposed
by Blanchard and Perotti (2002) we reconsider the effects of fiscal
policy shocks correcting for these shortcomings.
U.K. Compared to the closed economy, the presence of an exchange rate channel for
monetary policy not only produces new trade-offs for monetary policy, but it also
introduces an additional source of specification errors. We find that exchange rate
shocks are an important contributor to volatility in the model, and that the exchange
rate equation is particularly vulnerable to model misspecification, along with the
equation for domestic inflation. However, when policy is set with discretion, the
cost of insuring against model misspecification appears reasonably small.
systems will have to be retrenched. In particular, retirement age will have to be substantially
increased. Yet, is this policy measure feasible in OECD countries? Since the answer
belongs mainly to the realm of politics, I evaluate the political feasibility of postponing
retirement under aging in France, Italy, the UK, and the US. Simulations for the year
2050 steady state demographic, economic and political scenario suggest that retirement
age will be postponed in all countries, while the social security contribution rate will
rise in all countries but Italy. The political support for increasing the retirement age
stems mainly from the negative income effect induced by aging, which reduces the
profitability of the existing social security system, and thus individuals' net social
security wealth.
action or seek a profitable innovation that may enhance or reduce welfare. The legislator
sets fines calibrated to the harmfulness of unlawful actions. The range of fines defines norm
flexibility. Expected sanctions guide firms' choices among unlawful actions (marginal deter-
rence) and/or stunt their initiative altogether (average deterrence). With loyal enforcers,
maximum norm flexibility is optimal, so as to exploit both marginal and average deterrence.
With corrupt enforcers, instead, the legislator should prefer more rigid norms that prevent
bribery and misreporting, at the cost of reducing marginal deterrence and stunting private
initiative. The greater the potential corruption, the more rigid the optimal norms.
als, which we call the generalized fractionalization index, that uses information on similarities
among individuals. We show that the generalized index is a natural extension of the
widely used ethno-linguistic fractionalization index and is also simple to compute. The
paper offers some empirical illustrations of how the new index can be operationalized and
what difference it makes compared to standard indices. These applications pertain to
the pattern of diversity across states in the United States.
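A hedged sketch of how a similarity-based index of this kind can be computed; the functional form below, GF = 1 - (1/N^2) * sum_ij s_ij, is the natural generalization that collapses to the standard ethno-linguistic fractionalization index when similarities are 0/1 group indicators, and the similarity matrices are hypothetical.

```python
import numpy as np

def generalized_fractionalization(similarity: np.ndarray) -> float:
    """Index = 1 - average pairwise similarity, with s_ii = 1 and 0 <= s_ij <= 1."""
    n = similarity.shape[0]
    return 1.0 - similarity.sum() / n**2

# With 0/1 similarities given by group membership, the index reduces to the
# standard ethno-linguistic fractionalization index 1 - sum_k share_k^2.
groups = np.array([0, 0, 0, 1, 1, 2])                          # hypothetical group labels
s_binary = (groups[:, None] == groups[None, :]).astype(float)
print(generalized_fractionalization(s_binary))                 # 1 - (1/2)^2 - (1/3)^2 - (1/6)^2

# A graded similarity matrix (e.g. based on linguistic distance) is also allowed.
s_graded = np.where(s_binary == 1, 1.0, 0.3)
print(generalized_fractionalization(s_graded))                 # lower: out-group pairs are partly similar
```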
fiscal policy by considering the Italian case. Empirical analysis has been so
far rather inconclusive on this important topic. We ascribe such evidence
to three problems: identification, regime-switching and maturity effects. All
these aspects are particularly relevant to the Italian case.
We propose a parsimonious model with three factors to
represent the whole yield curve, and we consider yield
differentials between Italian and German Government bonds.
To take into account the possibility of regime-switching, we explicitly include
a hidden two-state Markov chain that represents market expectations. The
model is estimated using Bayesian econometric techniques. We find that government
debt and its evolution significantly influence the yield of government
bonds, that such effects are maturity dependent and regime-dependent. Hence
when investigating the effect of fiscal policy on the term-structure it is of crucial
importance to allow for multiple regimes in the estimation.
Keywords: Fiscal Policy, Term Structure, regime switching, Bayesian estimation
utility from decision problems under exogenous uncertainty to choice in strategic
environments. Interactive uncertainty is modeled both explicitly - using
hierarchies of preference relations, the analogue of beliefs hierarchies - and
implicitly - using preference structures, the analogue of type spaces à la
Harsanyi - and it is shown that the two approaches are equivalent.
Preference structures can be seen as those sets of hierarchies arising when certain
restrictions on preferences, along with the players' common certainty of
the restrictions, are imposed. Preferences are a priori assumed to satisfy only
very mild properties (reflexivity, transitivity, and monotone continuity).
Thus, the results provide a framework for the analysis of behavior in games
under essentially any axiomatic structure. An explicit characterization is
given for Savage's axioms, and it is shown that a hierarchy of relatively
simple preference relations uniquely identifies the decision maker's
utilities and beliefs of all orders. Connections with the literature on beliefs
hierarchies and correlated equilibria are discussed.
Keywords: Subjective probability, Preference hierarchies, Type spaces, Beliefs
hierarchies, Common belief, Expected utility, Incomplete information,
Correlated equilibria
on sellers' investment. We show that a retailer extracts a larger
surplus from the negotiation with an upstream manufacturer the
more it is essential to the creation of total surplus. In turn, this
depends on the rivalry between retailers in the bargaining process.
Rivalry increases when the retail market is more fragmented, when
the retailers are less differentiated and when decreasing returns to
scale in production are larger. The allocation of total surplus also affects
the incentives of producers to invest in product quality, an instance
of the hold-up problem. This not only makes both the supplier and
consumers worse off, but it may also harm the retailers.
Keywords: Retailers' power, Hold-up, Supplier's under-investment
Our paper seeks to answer this question by providing evidence on the
age-productivity and age-earnings profiles for a sample of plants in three
manufacturing industries (forest, industrial machinery and electronics) in
Finland. Our main result is that exposure to rapid technological and managerial
changes does make a difference for plant productivity, less so for wages. In
electronics, the Finnish industry undergoing a major technological and
managerial shock in the 1990s, the response of productivity to age-related
variables is first sizably positive and then becomes sizably negative as one
looks at plants with higher average seniority and experience. This declining
part of the curve appears neither in the forest industry nor in industrial
machinery, nor does it appear for wages in electronics. These conclusions
survive when a host of other plausible productivity determinants (notably,
education and plant vintage) are included in the analysis. We conclude that
workforce aging may be a burden for firms in high-tech industries and less so in
other industries.
We study the joint dynamics of economic and political change. Predictions of the simple model that we formulate in the paper get
considerable support in a panel of data on political regimes and GDP per capita for about 150 countries over 150 years. Democratic cap-
ital - measured by a nation's historical experience with democracy and by the incidence of democracy in its neighborhood - reduces the
exit rate from democracy and raises the exit rate from autocracy. In democracies, a higher stock of democratic capital stimulates growth
in an indirect way by decreasing the probability of a successful coup. Our results suggest a virtuous circle, where the accumulation of phys-
ical and democratic capital reinforce each other, promoting economic development jointly with the consolidation of democracy.
model misspecification. The principal tools used to solve robust control problems
are state-space methods (see Hansen and Sargent, 2006, and Giordani and
Soderlind, 2004). In this paper we show that the structural-form methods
developed by Dennis (2006) to solve control problems with rational expectations
can also be applied to robust control problems, with the advantage that they
bypass the task, often onerous, of having to express the reference model in
state-space form. Interestingly, because state-space forms and structural forms
are not unique the two approaches do not necessarily return the same equilibria
for robust control problems. We apply both state-space and structural solution
methods to an empirical New Keynesian business cycle model and find that the
differences between the methods are both qualitatively and quantitatively important.
In particular, with the structural-form solution methods the specification errors generally
involve changes to the conditional variances in addition to the conditional means of the
shock processes.
is attracting considerable attention. In this paper we briefly review the underlying
theory and then compare the impulse response functions resulting from two alternative
estimation methods for the DFM. Finally, as an example, we reconsider the issue of
the identification of the driving forces of the US economy, using data for about 150
macroeconomic variables.
considerable attention recently, due to the increased availability of large datasets. In
this paper we propose a new parametric methodology for estimating factors from large
datasets based on state space models and discuss its theoretical properties. In particular,
we show that it is possible to estimate consistently the factor space. We also
develop a consistent information criterion for the determination of the number of factors
to be included in the model. Finally, we conduct a set of simulation experiments
that show that our approach compares well with existing alternatives.
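For illustration only, and not the paper's state-space estimator: a standard principal-components benchmark with a Bai-Ng (2002)-type information criterion for the number of factors, applied to simulated data.

```python
import numpy as np

# Simulated large panel with r_true common factors plus idiosyncratic noise
rng = np.random.default_rng(3)
T, N, r_true = 200, 50, 3
F = rng.standard_normal((T, r_true))
Lam = rng.standard_normal((N, r_true))
X = F @ Lam.T + rng.standard_normal((T, N))

Z = (X - X.mean(0)) / X.std(0)
u, s, vt = np.linalg.svd(Z, full_matrices=False)

def ic(r):
    # Residual variance after removing r principal-component factors,
    # plus an ICp2-type penalty in the number of factors.
    resid = Z - (u[:, :r] * s[:r]) @ vt[:r]
    v = (resid ** 2).sum() / (N * T)
    return np.log(v) + r * (N + T) / (N * T) * np.log(min(N, T))

print(min(range(1, 9), key=ic))            # typically recovers r_true = 3
```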
integrates labor market search and matching into an otherwise
standard New Keynesian model. I allow for changes of the labor
input at both the extensive and the intensive margin and develop
two alternative specifications of the bargaining process. Under
efficient bargaining (EB) hours are determined jointly by the firm
and the worker as a part of the same Nash bargain that determines
wages. With right to manage (RTM), instead, firms retain the right to
set hours of work unilaterally. I show that introducing search and
matching frictions affects the cyclical behavior of real marginal costs
by way of two different channels: a wage channel under RTM and an
extensive margin channel under EB. In both cases, the presence of
search and matching frictions may cause a lower elasticity of marginal
costs with respect to output and thus help to account for the observed
inertia in inflation.
parameter estimation and model evaluation when the objective function measures
the distance between estimated and model impulse responses. We show that
observational equivalence, partial and weak identification problems are widespread, that
they lead to biased estimates, unreliable t-statistics and may induce investigators to
select false models. We examine whether different objective functions affect identification
and study how small samples interact with parameters and shock identification.
We provide diagnostics and tests to detect identification failures and apply them to a
state-of-the-art model.
attempts to address this question that exploited within-country variation.
It shows that the answer is largely positive, but also depends on the details
of democratic reforms. First, the sequence of economic vs political reforms
matters: countries liberalizing their economy before extending political rights
do better. Second, different forms of democratic government lead to different
economic policies, and this might explain why presidential democracy leads
to faster growth than parliamentary democracy. Third, it is important to distinguish
between expected and actual political reforms. Taking expectations of regime
change into account helps identify a stronger growth effect of democracy.
The Italian economy is often said to be on a declining path. In this paper, we document that:
(i) Italy's current decline is a labor productivity problem; (ii) the labor productivity slowdown
stems from declining productivity growth in all industries but utilities (with manufacturing
contributing about one half of the reduction) and diminished inter-industry reallocation of
workers from agriculture to market services; (iii) the labor productivity slowdown has been
mostly driven by declining TFP, with roughly unchanged capital deepening. The only mild
decline of capital deepening is due to the rise in the value added share of capital that
counteracted declining capital accumulation.
We lay out a tractable model for fiscal and monetary policy analysis in
a currency union, and analyze its implications for the optimal design of such
policies. Monetary policy is conducted by a common central bank, which sets
the interest rate for the union as a whole. Fiscal policy is implemented at
the country level, through the choice of government spending level. The model
incorporates country-specific shocks and nominal rigidities. Under our assumptions,
the optimal monetary policy requires that inflation be stabilized at the
union level. On the other hand, the relinquishment of an independent monetary
policy, coupled with nominal price rigidities, generates a stabilization role
for fiscal policy, one beyond the efficient provision of public goods. Interestingly,
the stabilizing role for fiscal policy is shown to be desirable not only from
the viewpoint of each individual country, but also from that of the union as
a whole. In addition, our paper offers some insights on two aspects of policy
design in currency unions: the conditions for equilibrium determinacy and
the effects of exogenous government spending variations.
Pooling forecasts obtained from different procedures typically reduces
the mean square forecast error and more generally improves the quality
of the forecast. In this paper we evaluate whether pooling interpolated
or backdated time series obtained from different procedures can also
improve the quality of the generated data. Both simulation results
and empirical analyses with macroeconomic time series indicate that
pooling plays a positive and important role also in this context.
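A minimal sketch of the pooling idea in this context, with made-up "procedures": several noisy estimates of the same monthly series are averaged, and the pooled estimate typically has a lower mean square error than any individual one.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 120
truth = np.cumsum(rng.standard_normal(T))              # unobserved monthly series

# Hypothetical interpolated/backdated estimates from three different procedures,
# each noisy in its own way.
estimates = [truth + 0.8 * rng.standard_normal(T) for _ in range(3)]
pooled = np.mean(estimates, axis=0)                    # equal-weight pooling

mse = lambda x: float(np.mean((x - truth) ** 2))
print([round(mse(e), 3) for e in estimates], round(mse(pooled), 3))
```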
In this paper we assess the possibility of producing unbiased forecasts for fiscal variables in the
euro area by comparing a set of procedures that rely on different information sets and
econometric techniques. In particular, we consider ARMA models, VARs, small scale semi-
structural models at the national and euro area level, institutional forecasts (OECD), and
pooling. Our small scale models are characterized by the joint modelling of fiscal and monetary
policy using simple rules, combined with equations for the evolution of all the relevant
fundamentals for the Maastricht Treaty and the Stability and Growth Pact. We rank models on
the basis of their forecasting performance using the mean square and mean absolute error
criteria at different horizons. Overall, simple time series methods and pooling work well and are
able to deliver unbiased forecasts, or slightly upward-biased forecasts for the debt-GDP
dynamics. This result is mostly due to the short sample available, the robustness of simple
methods to structural breaks, and to the difficulty of modelling the joint behaviour of several
variables in a period of substantial institutional and economic changes. A bootstrap experiment
highlights that, even when the data are generated using the estimated small-scale multi-
country model, simple time series models can produce more accurate forecasts, due to
their parsimonious specification.
Many countries, especially developing ones, follow procyclical fiscal policies: spending goes up (taxes go down) in booms and spending goes down (taxes go up) in recessions. We provide an explanation for this suboptimal fiscal policy based upon political distortions and incentives for less-than-benevolent governments to appropriate rents. Voters have incentives similar to the classic "starve the Leviathan"
argument, and demand more public goods or lower taxes to prevent governments from appropriating rents when the economy is doing well.
We test this argument against more traditional explanations based purely on borrowing constraints, with a reasonable amount of success.
Do countries gain by coordinating their monetary policies if they have different economic structures? We address this issue in the context of a new open-economy macro model with a traded and a non-traded sector and, more importantly, with an across-country asymmetry in the size of the traded sector. We study optimal monetary policy under independent and cooperating central banks, based on analytical expressions for welfare objectives derived from quadratic approximations to individual preferences. In the presence of asymmetric structures, a new source of gains from coordination emerges due to a terms-of-trade externality. This externality unfavorably affects the country that is more exposed to trade, and its effects tend
to be overlooked when national central banks act independently. The welfare gains from coordination are sizable and increase with the degree of asymmetry across countries and the degree of openness, and decrease with the within-country correlation of sectoral shocks.
We study whether fiscal restrictions affect volatilities and correlations of macrovariables
and the probability of excessive debt for a sample of 48 US states. Fiscal constraints are
characterized with a number of indicators and volatility and correlations are computed in several
ways. The second moments of macroeconomic variables in states with different fiscal constraints
are economically and statistically similar. Excessive debt and the mechanism linking budget
deficit and excessive debts are independent of whether tight or loose fiscal constraints are in
place. Creative budget accounting may account for the results.
We study how constrained fiscal policy can affect regional inflation and output in a two-region model of a monetary union with sticky prices and distortionary taxation. Both government expenditure and taxes can be used to stabilize regional variables; however, the best welfare outcome is obtained under constant taxes and constant regional inflations. With cooperation debt and deficit constraints reduce regional inflation variability, but the path of output is suboptimal. Under non-cooperation the opposite occurs due to a trade-off between taxation and inflation variability. Decentralized rules, rather than constraints, stabilize regional inflation and output. They imply more fiscal action for smaller union members.
We study the mechanics of transmission of fiscal shocks to labor markets. We
characterize a set of robust implications following government consumption, investment
and employment shocks in an RBC and a New-Keynesian model and use part of them to
identify shocks in the data. In line with the New-Keynesian story, shocks to government
consumption and investment increase real wages and employment contemporaneously
both in US aggregate and in US state data. The dynamics in response to employment
shocks are mixed, but in many cases are inconsistent with the predictions of the RBC
model.
Does culture have a causal effect on economic development? The data on European
regions suggest that it does. Culture is measured by indicators of individual values
and beliefs, such as trust and respect for others, and confidence in individual self-determination.
To isolate the exogenous variation in culture, I rely on two historical
variables used as instruments: the literacy rate at the end of the XIXth century, and
the political institutions in place over the past several centuries. The political and
social history of Europe provides a rich source of variation in these two variables at a
regional level. The exogenous component of culture due to history is strongly
correlated with current regional economic development, after controlling for
contemporaneous education, urbanization rates around 1850 and national effects.
Moreover, the data do not reject the over-identifying assumption that the two
historical variables used as instruments only influence regional development through
culture. The indicators of culture used in this paper are also strongly correlated with
economic development and with available measures of institutions in a cross-country
setting.
Consumption is striking back. Some recent evidence indicates that
the well-known asset pricing puzzles generated by the difficulties of
matching fluctuations in asset prices with high frequency fluctuations
in consumption might be solved by considering consumption in
the long-run. A first strand of the literature concentrates on multiperiod
differences in log consumption, a second concentrates on the
cointegrating relation for consumption. Interestingly, only the (multiperiod)
Euler Equation for the consumer optimization problem is
considered by the first strand of the literature, while the cointegration-based
literature concentrates exclusively on the (linearized) intertemporal
budget constraint. In this paper, we show that using the first
order condition in the linearized budget constraint to derive an explicit
long-run consumption function delivers an even more striking
strike back.
This paper studies how a central bank's preference for robustness against
model misspecification affects the design of monetary policy in a New-Keynesian
model of a small open economy. Due to the simple model structure,
we are able to solve analytically for the optimal robust policy rule, and we
separately analyze the effects of robustness against misspecification concerning
the determination of inflation, output and the exchange rate. We show that
an increased central bank preference for robustness makes monetary policy
respond more aggressively or more cautiously to shocks, depending on the
type of shock and the source of misspecification.
This paper introduces underground activities and tax evasion into a one-sector dynamic general equilibrium model with external effects. The model presents a novel mechanism driving self-fulfilling prophecies, which is triggered by the reallocation of resources to the underground sector to avoid the excess tax burden. This mechanism differs from the customary one and is complementary to it. In addition, the explicit introduction of an (even tiny) underground sector reduces the aggregate degree of increasing returns required for indeterminacy and for well-behaved input demand schedules (in the sense that they slope downward).
Journal of Economic Literature Classification Numbers: O40, E260
A central problem for the game theoretic analysis of voting is that voting games
have very many Nash equilibria. In this paper, we consider a new refinement
concept for voting games that combines two ideas that appear reasonable for voting
games: First, trembling hand perfection (voters sometimes make mistakes when
casting their vote) and second, coordination of voters with similar interests. We
apply this refinement to an analysis of multicandidate elections under plurality rule
and runoff rule.
For plurality rule, we show that our refinement implies Duverger's law: In all
equilibria, (at most) two candidates receive a positive number of votes. For the case
of 3 candidates, we can completely characterize the set of equilibria. Often, there
exists a unique equilibrium satisfying our refinement; surprisingly, this is true even
if there is no Condorcet winner. We also consider the equilibria under a runoff rule
and analyze when plurality rule and runoff rule yield different outcomes.
Building on recent work on dynamic interactive epistemology, we
extend the analysis of extensive-form psychological games (Geanakoplos,
Pearce & Stacchetti, Games and Economic Behavior, 1989) to
include conditional higher-order beliefs and enlarged domains of pay-off
functions. The approach allows modeling dynamic psychological
effects (such as sequential reciprocity, psychological forward induction,
and regret) that are ruled out when epistemic types are identified with
hierarchies of initial beliefs. We define a notion of psychological sequential
equilibrium, which generalizes the sequential equilibrium notion for
traditional games, for which we prove existence under mild assumptions.
Our framework also allows us to directly formulate assumptions about
"dynamic" rationality and interactive beliefs in order to explore strategic
interaction without assuming that players' beliefs are coordinated on an
equilibrium. In particular, we provide an exploration of (extensive-form)
rationalizability in psychological games.
We provide a summary updated guide for the construction, use and evaluation of
leading indicators, and an assessment of the most relevant recent developments in this
field of economic forecasting. To begin with, we analyze the problem of selecting a
target coincident variable for the leading indicators, which requires coincident indicator
selection, construction of composite coincident indexes, choice of filtering methods,
and business cycle dating procedures to transform the continuous target into a binary
expansion/recession indicator. Next, we deal with criteria for choosing good leading
indicators, and simple non-model based methods to combine them into composite indexes.
Then, we examine models and methods to transform the leading indicators into
forecasts of the target variable. Finally, we consider the evaluation of the resulting
leading indicator based forecasts, and review the recent literature on the forecasting
performance of leading indicators.
Iterated multiperiod ahead time series forecasts are made using a one-period ahead model, iterated forward for the desired number of periods, whereas direct forecasts are made using a horizon-specific estimated model, where the dependent variable is the multi-period ahead value being forecasted. Which approach is better is an empirical matter: in theory, iterated forecasts are more efficient if correctly specified, but direct forecasts are more robust to model misspecification. This paper compares empirical iterated and direct forecasts from linear univariate and bivariate models by applying simulated out-of-sample methods to 171 U.S. monthly macroeconomic time series spanning 1959 - 2002. The iterated forecasts typically outperform the direct forecasts, particularly if the models can select long lag specifications. The relative performance of the iterated forecasts improves with the forecast horizon.
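A hedged sketch of the two schemes on a single simulated AR(1) series (not the paper's 171-series exercise): the iterated forecast fits a one-step model and iterates it h steps ahead, while the direct forecast regresses the h-step-ahead value on the current one.

```python
import numpy as np

rng = np.random.default_rng(5)
T, h = 400, 6
y = np.zeros(T)
for t in range(1, T):                      # simulate an AR(1) process
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

def ols(x, z):
    """Intercept and slope of a regression of z on x."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, z, rcond=None)[0]

train = y[:300]
a1, b1 = ols(train[:-1], train[1:])        # one-step-ahead model (for iterated forecasts)
ah, bh = ols(train[:-h], train[h:])        # horizon-specific model (for direct forecasts)

f_iter = train[-1]
for _ in range(h):                         # iterate the one-step model h times
    f_iter = a1 + b1 * f_iter
f_direct = ah + bh * train[-1]             # direct h-step forecast
print(f_iter, f_direct, y[299 + h])        # the two forecasts vs the realized value
```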
We analyse the panel of the Greenbook forecasts (sample 1970-1996) and a
large panel of monthly variables for the US (sample 1970-2003) and show that
the bulk of the dynamics of both the variables and their forecasts is explained by two
shocks. Moreover, a two factor model which exploits, in real time, information
on many time series to extract a two dimensional signal, produces a degree of
forecasting accuracy of the federal funds rate similar to that of the markets, and,
for output and inflation, similar to that of the Greenbook forecasts. This leads us
to conclude that the stochastic dimension of the US economy is two. We also show
that dimension two is generated by a real and nominal shock, with output mainly
driven by the real shock and inflation by the nominal shock. The implication is
that, by tracking any forecastable measure of real activity and price dynamics, the
Central Bank can track all fundamental dynamics in the economy.
How does the relationship between an investor and entrepreneur depend on the legal
system? In a double moral hazard framework, we show how optimal contracts,
corporate governance, and investor actions depend on the legal system. With better
legal protection, investors give more non-contractible support, demand more downside
protection, and exercise more governance. Investors in better legal systems develop
stronger governance and support competencies. Therefore, when investing in a different
legal system they behave differently than local investors. We test these predictions
using a hand-collected dataset of European venture capital deals. The empirical
results confirm the predictions of the model.
We employ Markov-switching regression methods to estimate fiscal policy feedback rules
in the U.S. for the period 1960-2002. Our approach allows us to capture policy regime changes
endogenously. We reach three main conclusions. First, fiscal policy may be characterized,
according to Leeper's (1991) terminology, as active from the 1960s throughout the 1980s, switching
gradually to passive in the early 1990s and switching back to active in early 2001. Second,
regime-switching fiscal rules are capable of tracking the time-series behaviour of the U.S. primary
deficit better than rules based on a constant parameter specification. Third, regime-switches in
monetary and fiscal policy rules do not exhibit any degree of synchronization. Our results are
at odds with the view that the post-war U.S. fiscal policy regime may be classified as passive at
all times, and seem to pose a challenge for the specification of the correct monetary-fiscal mix
within recent optimizing macroeconomic models considered suitable for policy analysis.
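A purely illustrative sketch of a two-regime Markov-switching regression of the primary deficit on lagged debt, using statsmodels; the data, the regressor set, and the regime interpretation below are hypothetical and do not reproduce the authors' specification.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical quarterly data: primary deficit responding to lagged debt with a
# coefficient that differs across two policy regimes.
rng = np.random.default_rng(6)
T = 200
debt_lag = np.cumsum(rng.standard_normal(T)) + 50              # lagged debt/GDP ratio
regime = (np.arange(T) // 50) % 2                              # alternating (unobserved) regimes
beta = np.where(regime == 0, -0.15, 0.05)                      # "passive" vs "active" response
deficit = 1.0 + beta * (debt_lag - 50) + 0.5 * rng.standard_normal(T)

# Two-regime Markov-switching regression: intercept, debt response and variance
# are all allowed to switch; regime changes are estimated endogenously.
mod = sm.tsa.MarkovRegression(deficit, k_regimes=2, exog=debt_lag - 50,
                              switching_variance=True)
res = mod.fit()
print(res.params)                                              # regime-specific coefficients
```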
We explore the determinants of yield differentials between sovereign bonds in the Euro
area. There is a common trend in yield differentials, which is correlated with a measure
of the international risk factor. In contrast, liquidity differentials display sizeable heterogeneity
and no common factor. We present a model that predicts that yield differentials
should increase in both liquidity and risk, with an interaction term whose magnitude and
sign depend on the size of the liquidity differential with respect to the reference country.
Testing these predictions on daily data, we find that the international risk factor is consistently
priced, while liquidity differentials are priced only for a subset of countries and
their interaction with the risk factor is crucial to detect their effect.
This paper brings together two strands of the empirical macro literature:
the reduced-form evidence that the yield spread helps in forecasting output
and the structural evidence on the difficulties of estimating the effect of monetary
policy on output in an intertemporal Euler equation. We show that
including a short-term interest rate and inflation in the forecasting equation
improves the forecasting performance of the spread for future output but the
coefficients on the short rate and inflation are difficult to interpret using a
standard macroeconomic framework. A decomposition of the yield spread
into an expectations-related component and a term premium allows a better
understanding of the forecasting model. In fact, the best forecasting model for
output is obtained by considering the term premium, the short-term interest
rate and inflation as predictors. We provide a possible structural interpretation
of these results by allowing for time-varying risk aversion, linearly related
to our estimate of the term premium, in an intertemporal Euler equation for
output.
We study optimal monetary policy in two prototype economies with sticky prices and credit
market frictions. In the first economy, credit frictions apply to the financing of the capital stock,
generate acceleration in response to shocks and the financial markup (i.e., the premium on
external funds) is countercyclical and negatively correlated with the asset price. In the second
economy, credit frictions apply to the flow of investment, generate persistence, and the financial
markup is procyclical and positively correlated with the asset price. We model monetary policy
in terms of welfare-maximizing interest rate rules. The main finding of our analysis is that strict
inflation stabilization is a robust optimal monetary policy prescription. The intuition is that, in
both models, credit frictions work in the direction of dampening the cyclical behavior of inflation
relative to its credit-frictionless level. Thus neither economy, despite yielding different inflation
and investment dynamics, generates a trade-off between price and financial markup stabilization.
A corollary of this result is that reacting to asset prices does not bear any independent welfare
role in the conduct of monetary policy.
We provide a long term perspective on the individual retirement behavior
and on the future of early retirement. In a cross-country sample, we
find that total pension spending depends positively on the degree of early
retirement and on the share of elderly in the population, which increase
the proportion of retirees, but have hardly any effect on the per-capita pension
benefits. We show that in a Markovian political economic theoretical
framework, in which incentives to retire early are embedded, a political
equilibrium is characterized by an increasing sequence of social security
contribution rates converging to a steady state and early retirement. Comparative
statics suggest that aging and productivity slow-downs lead to
higher taxes and more early retirement. However, when income effects
are factored in, the model suggests that periods of stagnation - characterized
by decreasing labor income - may lead middle aged individuals to
postpone retirement.
Using a structural Vector Autoregression approach, this paper compares the
macroeconomic effects of the three main government spending tools: government
investment, consumption, and transfers to households, both in terms of the size
and the speed of their effects on GDP and its components. Contrary to a common
view, there is no evidence that government investment shocks are more
effective than government consumption shocks in boosting GDP: this is true both
in the short and, perhaps more surprisingly, in the long run. In fact, government
investment appears to crowd out private investment, especially in dwelling and in
machinery and equipment. There is no evidence that government investment pays
for itself in the long run, as proponents of the Golden Rule implicitly or explicitly
argue. The positive effects of government consumption itself are rather limited,
and defense purchases have even smaller (or negative) effects on GDP and private
investment. There is also no evidence that government transfers are more effective
than government consumption in stimulating demand.
This paper studies the effects of fiscal policy on GDP, inflation and interest rates
in 5 OECD countries, using a structural Vector Autoregression approach. Its main
results can be summarized as follows: 1) The effects of fiscal policy on GDP tend
to be small: government spending multipliers larger than 1 can be estimated only
in the US in the pre-1980 period. 2) There is no evidence that tax cuts work faster
or more effectively than spending increases. 3) The effects of government spending
shocks and tax cuts on GDP and its components have become substantially weaker
over time; in the post-1980 period these effects are mostly negative, particularly on
private investment. 4) Only in the post-1980 period is there evidence of positive
effects of government spending on long interest rates. In fact, when the real interest
rate is held constant in the impulse responses, much of the decline in the response
of GDP in the post-1980 period in the US and UK disappears. 5) Under plausible
values of its price elasticity, government spending typically has small effects on
inflation. 6) Both the decline in the variance of the fiscal shocks and the change
in their transmission mechanism contribute to the decline in the variance of GDP
after 1980.
Focusing on signaling games, I illustrate the relevance of the rationalizability
approach for the analysis of multistage games with incomplete
information. I define a class of iterative solution procedures, featuring a
notion of forward induction: the Receiver tries to explain the Sender's
message in a way which is consistent with the Sender's strategic sophistication
and certain given restrictions on beliefs. The approach is applied to
some numerical examples and economic models. In a standard model with
verifiable messages a full disclosure result is obtained. In a model of job
market signaling the best separating equilibrium emerges as the unique
rationalizable outcome only when the high and low types are sufficiently
different. Otherwise, rationalizability only puts bounds on the education
choices of different types.
This paper suggests that the main (and possibly unique) source of β- and σ- convergence
in GDP per worker (i.e. labor productivity) across Italian regions over the
1980-2000 period is the change in technical and allocative efficiency, i.e. convergence
in relative TFP levels. To reach this conclusion, I construct an approximation of
the production frontier at different points in time using Data Envelopment Analysis
(DEA), and measure efficiency as the output-based distance from the frontier. This
method is entirely data-driven, and does not require the specification of any particular
functional form for technology. Changes in GDP per worker can be decomposed
into changes in relative efficiency, changes due to overall technological progress, and
changes due to capital deepening. My results suggest that: (i) differences in relative
TFP are quantitatively important; (ii) while technological progress and capital
deepening are the main, and equally important, forces behind the rightward shift
in the distribution of GDP per worker, convergence in relative TFP is the main
determinant of the change in the distribution's shape.
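A hedged sketch of the output-oriented, constant-returns DEA program underlying this kind of exercise; the regional data are hypothetical and the paper's exact returns-to-scale assumptions and decomposition steps are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """Output-oriented, constant-returns DEA. X: (n, k) inputs, Y: (n, m) outputs.
    Returns Farrell efficiency scores in (0, 1], where 1 = on the frontier."""
    n, k = X.shape
    m = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]; maximize phi.
        c = np.r_[-1.0, np.zeros(n)]
        # Inputs of the reference combination cannot exceed unit o's inputs.
        A_in = np.hstack([np.zeros((k, 1)), X.T])
        # Expanded outputs phi * y_o cannot exceed the reference combination's outputs.
        A_out = np.hstack([Y[o][:, None], -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[X[o], np.zeros(m)])
        scores.append(1.0 / -res.fun)      # Farrell efficiency = 1 / phi*
    return np.array(scores)

# Hypothetical regional data: inputs = (employment, capital), output = GDP
inputs = np.array([[10.0, 5.0], [8.0, 8.0], [12.0, 6.0]])
output = np.array([[100.0], [90.0], [80.0]])
print(dea_output_efficiency(inputs, output))
```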
We study the effects of model uncertainty in a simple New-Keynesian
model using robust control techniques. Due to the simple model structure, we
are able to find closed-form solutions for the robust control problem, analyzing
both instrument rules and targeting rules under different timing assumptions.
In all cases but one, an increased preference for robustness makes monetary
policy respond more aggressively to cost shocks but leaves the response to
demand shocks unchanged. As a consequence, inflation is less volatile and
output is more volatile than under the non-robust policy. Under one particular
timing assumption, however, increasing the preference for robustness has no
effect on the optimal targeting rule (nor on the economy).
The existing studies of unemployment benefit and unemployment duration suggest that reforms
that lower either the level or the duration of benefits should reduce unemployment. Despite the
large number of such reforms implemented in Europe in the past decades, this paper presents
evidence that shows no correlation between the reforms and the evolution of unemployment.
This paper also provides an explanation for this fact by exploring the
interactions between unemployment benefits and social assistance programmes. Unemployed
workers who are also eligible, or expect to become eligible, for some social assistance
programmes are less concerned about their benefits being reduced or terminated. They will not
search particularly intensively around the time of benefit exhaustion, nor will they become particularly
less choosy about job offers by reducing their reservation wages. Data from the European
Community Household Panel (ECHP) are used to provide evidence to support this argument.
Results show that, in fact, for social assistance recipients the probability of finding a job is not
particularly higher during the last months of entitlement.
We document the presence of a trade-off between unemployment benefits (UB) and employment protection legislation (EPL) in the provision of insurance against labor market risk. Different countries' locations along this trade-off represent stable, hard-to-modify politico-economic equilibria. We develop a model in which voters are required to cast a ballot over the strictness of EPL, the generosity of UBs and the amount of redistribution involved in the financing of unemployment insurance. Agents are heterogeneous along two dimensions: employment status - insiders and outsiders - and skills - low and high. Unlike previous work on EPL, we model employment protection as an institution redistributing among insiders, notably in favour of low-skill workers. A key implication of the model is that configurations with strict EPL and low UB should emerge in the presence of compressed wage structures. Micro data on wage premia on educational attainments and on the strictness of EPL are in line with our results. We also find empirical support for the substantive assumptions of the model on the effects of EPL.
We study how public policy can contribute to increase the share of early stage and
high-tech venture capital investments, thus helping the development of active venture
capital markets. A simple extension of the seminal model by Holmstrom and Tirole
(1997) provides a theoretical base for our analysis. We then explore a unique panel of
data for 14 European countries between 1988 and 2001. We have several novel findings.
First, the opening of stock markets targeted at entrepreneurial companies positively
affects the shares of early stage and high-tech venture capital investments; reductions
in capital gains tax rates have a similar, albeit weaker, effect. Second, a reduction in
labor regulation results in a higher share of high-tech investments. Finally, we find no
evidence of a shortage of supply of venture capital funds in Europe, and no evidence
of an effect of increased public R&D spending on the share of high-tech or early stage
venture capital investments.
The aim of this paper is to propose a new method for forecasting Italian
inflation. We expand on a standard factor model framework (see Stock and
Watson (1998)) along several dimensions. To start with, we pay special
attention to the modeling of the autoregressive component of inflation.
Second, we apply forecast combination (Granger (2000) and Pesaran and
Timmermann (2001)) and generate our forecast by averaging the predictions
of a large number of models. Third, we allow for time variation in parameters
by applying rolling regression techniques, with a window of three years of
monthly data. Backtesting shows that our strategy outperforms both the
benchmark model (i.e. a factor model which does not allow for model
uncertainty) and additional univariate (ARMA) and multivariate (VAR)
models. Our strategy also improves on alternative models when
applied to turning point prediction.
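A stripped-down sketch of the rolling-window forecast-combination step (three-year windows, averaging predictions of several simple models); the predictors below are hypothetical and the paper's factor construction and model set are much richer.

```python
import numpy as np

rng = np.random.default_rng(7)
T, window = 180, 36                        # monthly data, three-year rolling window
infl = 2 + np.cumsum(0.05 * rng.standard_normal(T))            # hypothetical inflation series
x1 = infl + rng.standard_normal(T)         # two hypothetical predictors (e.g. factors)
x2 = 0.5 * infl + rng.standard_normal(T)

def ols_forecast(y, X):
    """Fit y[t+1] on X[t] over the window and forecast one step ahead from the last row."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(Z[:-1], y[1:], rcond=None)[0]
    return Z[-1] @ beta

forecasts = []
for t in range(window, T - 1):
    w = slice(t - window, t + 1)           # rolling estimation window ending at t
    preds = [ols_forecast(infl[w], infl[w][:, None]),                  # AR(1)-type model
             ols_forecast(infl[w], x1[w][:, None]),                    # single-predictor model
             ols_forecast(infl[w], np.column_stack([x1[w], x2[w]]))]   # two-predictor model
    forecasts.append(np.mean(preds))       # forecast combination: average of model predictions
mse = np.mean((np.array(forecasts) - infl[window + 1:T]) ** 2)
print(f"combined-forecast MSE: {mse:.4f}")
```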
This paper integrates a theory of equilibrium unemployment into a monetary model
with nominal price rigidities. The model is used to study the dynamic response of the
economy to a monetary policy shock. The labor market displays search and matching
frictions and bargaining over real wages and hours of work. Search frictions generate unemployment in equilibrium. Wage bargaining introduces a microfounded real wage
rigidity. First, I study a Nash bargaining model. Then, I develop an alternative
bargaining model, which I refer to as right-to-manage bargaining. Both models have
similar predictions in terms of real wage dynamics: bargaining significantly reduces
the volatility of the real wage. But they have different implications for inflation
dynamics: under right-to-manage, the real wage rigidity also results in smaller
fluctuations of inflation. These findings are consistent with recent evidence
suggesting that real wages and inflation only vary by a moderate amount in
response to a monetary shock. Finally, the model can explain important features of
labor-market fluctuations. In particular, a monetary expansion leads to a rise in job
creation and to a hump-shaped decline in unemployment.
This paper explores the quantitative plausibility of three candidate explanations for the
European productivity slowdown with respect to the US. The empirical plausibility of the
common wisdom on the topic (the "IT usage" hypothesis) is found to crucially depend on
how IT-using industries are defined. If a narrow definition is chosen, the IT usage
hypothesis no longer explains the whole of the EU productivity slowdown but just about
55% of it, with the remaining part attributable to factors other than IT, as argued in the
IT irrelevance view. Instead, no room is left for IT-producing industries as an additional
vehicle for the US-EU productivity growth gap.
Abstract
Financial intermediaries can choose the extent to which they want to be active
investors, providing valuable services like advice, support and corporate governance.
We examine the determinants of the decision to become an active financial
intermediary using a hand-collected dataset on European venture capital deals. We
find organizational specialization to be a key driver. Venture firms which are
independent and focused on venture capital alone get more involved with their
companies. The human capital of venture partners is another key driver of active
financial intermediation. Venture firms whose partners have prior business
experience or a scientific education provide more support and governance. These
results have implications for prevailing views of financial intermediation, which largely
abstract from issues of specialization and human capital.
This paper discusses the recent literature on the role of the state in economic development.
It concludes that government incentives to enact sound policies are key to economic success.
It also discusses the evidence on what happens after episodes of economic and political
liberalizations, asking whether political liberalizations strengthen government incentives to
enact sound economic policies. The answer is mixed. Most episodes of economic
liberalizations are indeed preceded by political liberalizations. But the countries that have
done better are those that have managed to open up the economy first, and only later have
liberalized their political system.
Abstract
This paper studies empirically the effects of and the interactions amongst economic and
political liberalizations. Economic liberalizations are measured by a widely used indicator
that captures the scope of the market in the economy, and in particular of policies
towards freer international trade (cf. Sachs and Warner 1995, Wacziarg and Welch 2003).
Political liberalizations correspond to the event of becoming a democracy. Using a
difference-in-difference estimation, we ask what are the effects of liberalizations on
economic performance, on macroeconomic policy and on structural policies. The main
results concern the quantitative relevance of the feedback and interaction effects
between the two kinds of reforms. First, we find positive feedback effects between
economic and political reforms. The timing of events indicates that causality is more
likely to run from political to economic liberalizations, rather than vice versa, but we
cannot rule out feedback effects in both directions. Second, the sequence of reforms
matters. Countries that first liberalize and then become democracies do much better
than countries that pursue the opposite sequence, in almost all dimensions.
We develop a structural model of a small open economy with gradual exchange rate pass-through and endogenous inertia in inflation and output. We then estimate the model by matching the implied impulse responses with those obtained from a VAR model estimated on Swedish data. Although our model is highly stylized, it captures very well the responses of output, domestic and imported inflation, the interest rate, and the real exchange rate. However, in order to account for the observed persistence in the real exchange rate and
the large deviations from UIP, we need a large and volatile premium on foreign exchange.
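For readers unfamiliar with impulse-response matching, a minimal sketch of the estimator referred to above follows; `model_irf` (a function mapping structural parameters into stacked model-implied responses), `var_irf` (the stacked VAR responses), and the weighting matrix are placeholders, not objects from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def irf_distance(theta, model_irf, var_irf, weight):
    """Weighted quadratic distance between model-implied and VAR-based responses."""
    gap = model_irf(theta) - var_irf        # both stacked into 1-D arrays
    return float(gap @ weight @ gap)

def estimate_by_matching(model_irf, var_irf, weight, theta0):
    """Pick structural parameters that bring the model's impulse responses closest to the VAR's."""
    res = minimize(irf_distance, theta0, args=(model_irf, var_irf, weight))
    return res.x
```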
Firing frictions and renegotiation costs affect worker and firm preferences
for rigid wages versus individualized Nash bargaining in a standard
model of equilibrium unemployment, in which workers vary by
observable skill. Rigid wages permit savings on renegotiation costs and
prevent workers from exploiting the firing friction. For standard calibrations,
the model can account for political support for wage rigidity
by both workers and firms, especially in labor markets for intermediate
skills. The firing friction is necessary for this effect, and reinforces
the impact of both turbulence and other labor market institutions on
preferences for rigid wages.
We analyse the evolution of the business cycle in the accession countries, after a careful examination of the seasonal properties of the available series and the required modification of the cycle dating procedures. We then focus on the degree of cyclical concordance within the group of accession countries, which turns out to be in general lower than that between the existing EU countries (the Baltic countries constitute an exception). With respect to the Eurozone, the indications of synchronization are also generally low and lower than those observed for countries taking part in previous enlargements (with the exceptions of Poland, Slovenia and Hungary). In the light of the optimal currency area literature, these results cast doubt on the usefulness of adopting the euro in the near future for most accession countries, though other criteria such as the extent of trade and the gains in credibility may point in a different direction.
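The degree of cyclical concordance mentioned above is commonly summarized by the share of periods in which two countries are in the same cycle phase; a minimal version of such an index, assuming binary expansion/recession phase series, is:

```python
import numpy as np

def concordance_index(phase_a, phase_b):
    """Share of periods in which two countries are in the same phase (1 = expansion, 0 = recession)."""
    a = np.asarray(phase_a, dtype=int)
    b = np.asarray(phase_b, dtype=int)
    return float(np.mean(a == b))
```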
The accession of ten countries into the European Union makes the
forecasting of their key macroeconomic indicators such as GDP
growth, inflation and interest rates an exercise of some importance.
Because of the transition period, only short spans of reliable time series
are available which suggests the adoption of simple time series models
as forecasting tools, because of their parsimonious specification and
good performance. Nevertheless, despite this constraint on the span of
data, a large number of macroeconomic variables (for a given time
span) are available which are of potential use in forecasting, making the
class of dynamic factor models a reasonable alternative forecasting tool.
We compare the relative performance of the two forecasting approaches,
first by means of simulation experiments and then by using data for five
Acceding countries. We also evaluate the role of Euro-area information for
forecasting, and the usefulness of robustifying techniques such as
intercept corrections and second differencing. We find that factor models
work well in general, even though there are marked differences across
countries. Robustifying techniques are useful in a few cases, while
Euro-area information is virtually irrelevant.
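As a concrete, simplified stand-in for the dynamic factor forecasts compared above (the exact specification differs), one can extract principal-component factors from the standardized panel of indicators and run a direct h-step regression of the target on them; all names and the two-factor choice below are illustrative.

```python
import numpy as np

def factor_forecast(X, target, n_factors=2, h=1):
    """Direct h-step forecast of `target` using principal-component factors of panel X (T x N)."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)            # standardize the indicators
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_factors] * s[:n_factors]                 # estimated factors, T x r
    Y = np.asarray(target, dtype=float)[h:]              # target led h periods
    W = np.column_stack([np.ones(len(Y)), F[:-h]])
    b, *_ = np.linalg.lstsq(W, Y, rcond=None)
    return float(np.concatenate(([1.0], F[-1])) @ b)     # forecast for period T + h
```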
The hazard rate of investment is derived within a real option model, and its properties
are analyzed in order to directly study the relation between uncertainty and investment.
Maximum likelihood estimates of the hazard are calculated using a sample of MNEs that
have invested in Central and Eastern Europe over the period 1990-1998. Employing a
standard, non-parametric specification of the hazard, our measure of uncertainty has a
negative effect on investment, but the reduced-form model is unable to control for nonlinearities
in the relationship. The structural estimation of the option-based hazard is
instead able to account for the non-linearities and exhibits a significant value of waiting,
though the latter is independent of our measure of uncertainty. This finding supports
the existence of alternative channels through which uncertainty can affect investment.
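A hedged sketch of the reduced-form duration analysis described above: an exponential proportional-hazard likelihood with right censoring, fitted by maximum likelihood. The structural option-based hazard is not reproduced here, and the variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(beta, durations, entered, X):
    """Exponential proportional-hazard log-likelihood with right censoring (sign flipped)."""
    lam = np.exp(X @ beta)                          # firm-specific hazard of investing
    return -float(np.sum(entered * np.log(lam) - lam * durations))

def fit_hazard(durations, entered, X):
    """MLE of covariate effects; `entered` is 1 if the firm invested, 0 if censored."""
    beta0 = np.zeros(X.shape[1])
    return minimize(neg_loglik, beta0, args=(durations, entered, X)).x
```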
Equilibrium business cycle models typically have fewer shocks than variables.
As pointed out by Altug (1989) and Sargent (1989), if variables are measured with
error, this characteristic implies that the model solution for measured variables has
a factor structure. This paper compares estimation performance for the impulse
response coefficients based on a VAR approximation to this class of models and
an estimation method that explicitly takes into account the restrictions implied
by the factor structure. Bias and mean squared error for both factor based and
VAR based estimates of impulse response functions are quantified using, as data
generating process, a calibrated standard equilibrium business cycle model. We
show that, at short horizons, VAR estimates of impulse response functions are less
accurate than factor estimates while the two methods perform similarly at medium
and long run horizons.
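To fix ideas about the VAR-based benchmark, the reduced-form impulse responses of an estimated VAR(p) can be computed recursively from the OLS lag matrices, as in the sketch below; identification and the competing factor-based estimator are omitted, and the lag order and horizon are illustrative.

```python
import numpy as np

def var_irf(Y, p=2, horizon=12):
    """Reduced-form impulse responses (MA coefficients) of a VAR(p) fitted by OLS on Y (T x n)."""
    Y = np.asarray(Y, dtype=float)
    T, n = Y.shape
    X = np.column_stack([np.ones(T - p)] +
                        [Y[p - k:T - k] for k in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)                   # (1 + n*p) x n
    A = [B[1 + n * (k - 1):1 + n * k].T for k in range(1, p + 1)]   # lag matrices, each n x n
    Psi = [np.eye(n)]                                               # Psi_0 = I
    for h in range(1, horizon + 1):
        Psi.append(sum(A[k] @ Psi[h - 1 - k] for k in range(min(p, h))))
    return Psi                                                      # responses to reduced-form innovations
```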
This paper aims to test some implications of the Fiscal theory of
the price level (FTPL). We develop a model similar to Leeper (1991)
and Woodford (1996), but extended so as to generate real effects of fiscal
policy also in the "Ricardian" regime, via an OLG demographic
structure. We test on the data the predictions of the FTPL as incorporated
in the model. We find that the US fiscal policy in the period
1960-1979 can be classified as "Non-Ricardian", while it has been "Ricardian"
since 1990. According to our analysis, the fiscal theory of the
price level characterizes one phase of the post-war US history.
We use a quantitative model of the U.S. economy to analyze the response
of long-term interest rates to monetary policy, and compare the model results
with empirical evidence. We find that the strong and time-varying yield curve
response to monetary policy innovations found in the data can be explained by
the model. A key ingredient in explaining the yield curve response is central
bank private information about the state of the economy or about its own
target for inflation.
In this paper a simple dynamic optimization problem is solved with the help of
the recursive saddle point method developed by Marcet and Marimon (1999). According
to Marcet and Marimon, their technique should yield a full characterization
of the set of solutions for this problem. We show though, that while their method
allows us to calculate the true value of the optimization program, not all solutions
which it admits are correct. Indeed, some of the policies which it generates as
solutions to our problem are either suboptimal or do not even satisfy feasibility.
We identify the reasons underlying this failure and discuss its implications for the
numerous existing applications.
We analyze welfare maximizing monetary policy in a dynamic general equilibrium two-country
model with price stickiness and imperfect competition. In this context, a typical terms
of trade externality affects policy interaction between independent monetary authorities. Unlike
the existing literature, we remain consistent with a public finance approach through an explicit
consideration of all the distortions that are relevant to the Ramsey planner. This strategy entails
two main advantages. First, it allows an accurate characterization of optimal policy in an economy
that evolves around a steady state which is not necessarily efficient. Second, it allows us to describe
a full range of alternative dynamic equilibria when price setters in both countries are completely
forward-looking and households' preferences are not restricted. We study optimal policy both in
the long-run and in response to shocks, and we compare commitment under Nash competition
and under cooperation. By deriving a second order accurate solution to the policy functions,
we also characterize the welfare gains from international policy cooperation.
Abstract
In this paper we concentrate on the hypothesis that the empirical
rejections of the Expectations Theory (ET) of the term structure of interest
rates can be caused by improper modelling of expectations. Our
starting point is an interesting anomaly found by Campbell and Shiller (1987),
who, by taking a VAR approach, abandon the limited-information
approach to testing the ET, in which realized returns are taken as a proxy for
expected returns. We use financial factors and macroeconomic information
to construct a test of the theory based on simulating investors'
effort to use the model in 'real time' to forecast future monetary policy
rates. Our findings suggest that the importance of fluctuations of risk
premia in explaining the deviation from the ET is reduced when some
forecasting model for short-term rates is adopted and a proper evaluation
of the uncertainty associated with policy rate forecasts is considered.
Employment protection legislation (EPL) is not enforced uniformly across the board. There are a number of exemptions to the coverage
of these provisions: firms below a given threshold scale and workers with temporary contracts are not subject to the most restrictive provisions. This within-country variation in enforcement allows us to make inferences on the impact of EPL which go beyond the usual cross-country approach. In this paper we develop a simple model which explains why these exemptions are in place to start with. Then we empirically assess the effects of EPL on dismissal probabilities, based on a double-difference approach. Our results are in line with the predictions of the theoretical model. Workers in firms exempted from EPL are more likely to be laid off. We do not observe this effect in the case of temporary workers. There is no effect of the exemption threshold on the growth of firms.
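A minimal sketch of the double-difference logic, cast as a linear probability model in which the interaction between the exemption indicator and the second differencing dimension carries the effect of interest; variable names are illustrative rather than taken from the paper's data.

```python
import numpy as np

def double_difference(dismissed, exempt, second_dim):
    """OLS linear-probability double difference: the interaction coefficient is the effect of interest."""
    dismissed = np.asarray(dismissed, dtype=float)
    exempt = np.asarray(exempt, dtype=float)        # 1 if the worker's firm is exempt from EPL
    second_dim = np.asarray(second_dim, dtype=float)  # second differencing dimension (e.g. contract type)
    X = np.column_stack([
        np.ones(len(dismissed)),
        exempt,
        second_dim,
        exempt * second_dim,                         # double-difference interaction
    ])
    b, *_ = np.linalg.lstsq(X, dismissed, rcond=None)
    return b[3]
```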
We present a theoretical model of a parliamentary democracy, where
party structures, government coalitions and fiscal policies are endogenously
determined. The model predicts that, relative to proportional elections, majoritarian
elections reduce government spending because they reduce party
fragmentation and, therefore, the incidence of coalition governments. Party
fragmentation can persist under majoritarian rule if party supporters are
unevenly distributed across electoral districts. Economic and political data,
from up to 50 post-war parliamentary democracies, strongly support our
joint predictions from the electoral rule, to the party system, to the type of
government, and to government spending.
While there is consensus on the need to raise the time spent in the market by
European women, it is not clear how these goals should be achieved. Tax wedges,
assistance in the job search process, and part-time jobs are policy instruments that
are widely debated in policy circles. The paper presents a simple model of labour
supply with market frictions and heterogenous home production where the effects of
these policies can be coherently analysed. We show that subsidies to labour market
entry increase women's entry into the labour market, but they also increase exits from
the labour market, with an ambiguous effect on employment. Subsidies to part-time work do
increase employment, but they have ambiguous effects on hours and market production.
Finally, reductions in taxes on market activities that are highly substitutable with home
production have unambiguous positive effects on market employment and production.
We examine a model of contracting where parties interact repeatedly and can contract
at any point in time, but writing enforceable contracts is costly. A contract can
describe contingencies and actions at a more or less detailed level, and the cost of writing
a contract is proportional to the amount of detail. We consider both formal (externally
enforced) and informal (self-enforcing) contracts. The presence of writing costs has important
implications both for the optimal structure of formal contracts, particularly the
trade-off between contingent and spot contracts, and for the interaction between formal
and informal contracting. Our model sheds light on these implications and generates a
rich set of predictions about the determinants of the optimal mode of contracting.
This paper presents a simple model of imperfect labor markets with endogenous labor market participation and home production. We show that a two-sector economy (home and market) implies a three-state labor market when labor market imperfections take the form of an irreversible entry cost incurred by workers. This simple framework brings several results. First, it delivers an expression for the employment rate and, as side-products, a measure of the unemployment rate and the size of the labour force. Second, it rationalizes several empirical works on the definition of unemployment in labor force surveys. Third, it derives endogenously all flows between three labour market states. Fourth, a calibration of the model rationalizes differences in employment rates: in the US, we find a market productivity premium of +30% and market frictions of -15% compared to France. Finally, the model is a very simple reduced form of search models with which it is fully consistent: the irreversible entry cost is the opportunity cost of search and depends on aggregate conditions.
The existing literature ignores the fact that in most European countries the
strictness of Employment Protection Legislation (EPL) varies across the firm size
distribution. In Italy firms are obliged to rehire an unfairly dismissed worker only
if they employ more than 15 employees. Theoretically, the paper solves a
baseline model of EPL with threshold effects, and shows that firms close to the
threshold are characterized by an increase in inaction and by a reluctance to
grow. Empirically, the paper estimates transition probability matrices on firm
level employment using a longitudinal data set based on Italian Social Security
(INPS) records, and finds two results. First, firms close to the 15 employees
threshold experience an increase in persistence of 1.5 percent with respect to a
baseline statistical model. Second, firms with 15 employees are more likely to
move backward than upward. Finally, the paper tests the effect of a 1990 reform
which tightened the regulation on individual dismissal only for small firms. It
finds that the persistence of small firms relative to large firms increased
significantly. Overall, these threshold effects are significant and robust, but
quantitatively small.
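As a stylized version of the transition-probability exercise described above, one can bin firms into employment-size classes in two adjacent years and row-normalize the resulting count matrix; the size bins below are illustrative.

```python
import numpy as np

def transition_matrix(size_t, size_t1, bins=(13, 14, 15, 16, 17, 18)):
    """Row-normalized transition matrix between employment-size classes in two adjacent years."""
    s0 = np.digitize(size_t, bins)     # class of each firm at t
    s1 = np.digitize(size_t1, bins)    # class of each firm at t+1
    k = len(bins) + 1
    counts = np.zeros((k, k))
    for a, b in zip(s0, s1):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)
```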
We consider a society that has to elect an official who provides a public service
for the citizens. Potential candidates differ in their competence and every potential
candidate has private information about his opportunity cost to perform the task
of the elected official. We develop a new citizen candidate model with a unique
equilibrium to analyze citizens' candidature decisions.
Under some weak additional assumptions, bad candidates run with a higher
probability than good ones, and for unattractive positions, good candidates free-ride
on bad ones. We also analyze the comparative static effects of wage increases
and of the cost of running on the potential candidates' entry decisions.
This paper examines competition in a liberalized market, with reference to some key features of the natural gas industry. Each firm has a low (zero) marginal cost core capacity, due to long-term contracts with take-or-pay obligations, and additional capacity at higher marginal costs. The market is decentralized and the firms decide which customers to serve, competing then in prices. We show that under both sequential and simultaneous entry, there is a strong incentive to segment the market: when take-or-pay obligations are still to be covered, entering and competing for the same customers implies low margins. If instead a firm is left as a monopolist on a fraction of the market, exhausting its obligation, it has no further incentive to enter a second market, where the rival will be a monopolist as well. Hence, we obtain entry without competition. Antitrust ceilings do not prevent such an outcome, while a wholesale pool market induces generalized competition and low margins in the retail segment.
belongs to the realm of politics. We evaluate how political constraints shape the social
security system in six countries - France, Germany, Italy, Spain, the UK and the US -
under population aging. Two main aspects of the aging process are relevant to the
analysis. First, the increase in the dependency ratio - the ratio of retirees to workers
- reduces the average profitability of the unfunded social security system, thereby
inducing the agents to reduce the size of the system by substituting their claims
towards future pensions with more private savings. Second, an aging electorate leads
to larger systems, since it increases the relevance of pension spending on the
policy-makers' agenda. The overall assessment from our simulations is that the political
aspect dominates in all countries, albeit with some differences. Spain, the fastest aging
country, faces the largest increase in the social security contribution rate. When labor
market considerations are introduced, the political effect still dominates, but it is less
sizeable. Country specific characteristics (not accounted for in our simulations), such as
the degree of redistribution in the pension system and the existence of family ties in
the society, may also matter. Our simulations deliver a strong policy implication: an
increase in the effective retirement age always decreases the size of the system chosen
by the voters, while often increasing its generosity. Finally, delegation of pension policy
to the EC may reduce political accountability and hence help to reform the systems.
rival firms operating in an uncertain environment. We test the implications of the model
through a discrete choice panel data sample of MNEs that have invested in Central and
Eastern Europe over the period 1990-1997. Interacting the measure of rivals' investment
in country-industry pairs with uncertainty we find strong evidence for oligopolistic reaction,
especially through the channel of Bayesian learning postulated by the model. The
findings are robust with respect to different model specifications.
trade costs can generate sizable increases in trade volumes over time. A fall in trade
costs has two effects on the volume of trade. First, for given factor endowments, it
raises the degree of specialization of countries, leading to a larger volume of trade
in the short run. Second, it raises the factor price of each country's abundant
production factor, leading to diverging paths of relative factor endowments across
countries and a rising degree of specialization. A simulation exercise shows that
a fall in trade costs over time produces a non-linear increase in the trade share of
output as in the data. Even when elasticities of substitution are not particularly
high, moderate reductions in trade costs lead to large trade volumes over time.
menu of labor market outcomes. We document this neglected trade-off of
globalization for a sample of Indian manufacturing firms. On the one hand,
the employees of firms subject to foreign competition face a more uncertain
stream of earnings and riskier employment prospects. On the other, they enjoy
a more rapid career and/or have more opportunities to train and upgrade
their skills. The negative uncertainty costs and the positive incentive effects
of globalization are thus twins. Concentrating on just one side
of the coin gives a misleading picture of globalization.
employment protection legislation (EPL) in the provision of insurance against labour
market risk. The mix of quantity restrictions and price regulations adopted by the
various countries would seem to correspond to a stable politico-economic equilibrium.
We develop a model in which voters are required to cast a ballot over the strictness of
EPL and over the generosity of UB. Agents are heterogeneous along two dimensions:
employment status - there are insiders and outsiders - and skills - low and high skills.
We show that if there exists a majority of low-skill insiders, the voting game has a
politico-economic equilibrium with low UB and high EPL; otherwise, the equilibrium
features high UB and low EPL. Another testable implication of the model is that a
larger share of elderly workers increases the demand for EPL. Panel data on institutions
and on the age and educational structures of the populations are broadly in line with
our results. We also find that those favouring EPL over UB in a public opinion poll
carried out in 2001 in Italy have precisely the same characteristics predicted by our model.
Policies are typically chosen by politicians and bureaucrats. This paper investigates the criteria that should lead a society to allocate policy tasks to elected policymakers (politicians) or non elected bureaucrats. Politicians tend to be preferable for tasks that have the following features: they do not involve too much
specific technical ability relative to effort; there is uncertainty ex ante about ex post preferences of the public and flexibility is valuable; time inconsistency is not an issue; small but powerful vested interests do not have large stakes in the policy outcome; effective decisions over policies require taking into account policy
complementarities and compensating the losers; the policies imply redistributive conflicts among large groups of voters. The reverse applies to the attribution of prerogatives to bureaucrats.
area countries, evaluate the degree of synchronization, and compare the results with the UK and the US. Fourth, we construct indices of business cycle diffusion, and assess how widespread cyclical movements are throughout the economy. Finally, we repeat the dating exercise using monthly industrial production data, to evaluate whether the higher sampling frequency can compensate for the higher variability of the series and produce a more accurate dating.
from a large data set for forecasting, namely, the use of an automated model selection
procedure, the adoption of a factor model, and single-indicator-based forecast pooling. The
comparison is conducted using a large set of indicators for forecasting US inflation and GDP
growth. We also compare our large set of leading indicators with purely autoregressive
models, using an evaluation procedure that is particularly relevant for policy making. The
evaluation is conducted both ex-post and in a pseudo real time context, for several forecast
horizons, and using both recursive and rolling estimation. The results indicate a preference for
simple forecasting tools, with a good relative performance of pure autoregressive models, and
substantial instability in the leading characteristics of the indicators.
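The recursive-versus-rolling evaluation mentioned above can be illustrated with a simple pseudo out-of-sample loop around an AR(1) forecast; the model, window length and start of the evaluation sample are assumptions for exposition.

```python
import numpy as np

def ar1_forecast(train):
    """One-step AR(1) forecast fitted by OLS on the training sample."""
    y = np.asarray(train, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    b, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return b[0] + b[1] * y[-1]

def pseudo_real_time_rmse(series, start=60, scheme="recursive", window=60):
    """Out-of-sample RMSE under recursive (expanding) or rolling estimation windows."""
    y = np.asarray(series, dtype=float)
    errors = []
    for t in range(start, len(y) - 1):
        train = y[:t + 1] if scheme == "recursive" else y[t + 1 - window:t + 1]
        errors.append(y[t + 1] - ar1_forecast(train))
    return float(np.sqrt(np.mean(np.square(errors))))
```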
inflation and GDP growth. Our evaluation is based on using the variables in the ECB Euroarea
model database, plus a set of similar variables for the US. We compare the forecasting
performance of each indicator with that of purely autoregressive models, using an evaluation
procedure that is particularly relevant for policy making. The evaluation is conducted both ex post
and in a pseudo real time context, for several forecast horizons, and using both recursive
and rolling estimation. We also analyze three different approaches to combining the
information from several indicators. First, we discuss the use as indicators of the estimated
factors from a dynamic factor model for all the indicators. Second, an automated model
selection procedure is applied to models with a large set of indicators. Third, we consider
pooling the single indicator forecasts. The results indicate that single indicator forecasts are on
average better than those derived from more complicated methods, but for them to beat the
autoregression a different indicator has to be used in each period. A simple real-time
procedure for indicator-selection produces good results.
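A compact sketch of the single-indicator pooling approach discussed above: each indicator enters a simple ADL-type direct forecast of the target, and the pooled forecast is their equal-weight average; the regression form and the weights are illustrative assumptions.

```python
import numpy as np

def indicator_forecast(y, x, h=1):
    """Direct h-step forecast of y from one leading indicator x via a simple ADL(1,1) regression."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    Y = y[h:]
    W = np.column_stack([np.ones(len(Y)), y[:-h], x[:-h]])
    b, *_ = np.linalg.lstsq(W, Y, rcond=None)
    return float(np.array([1.0, y[-1], x[-1]]) @ b)

def pooled_forecast(y, indicators, h=1):
    """Equal-weight pooling of the single-indicator forecasts."""
    return float(np.mean([indicator_forecast(y, x, h) for x in indicators]))
```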
rule-based empirical macro models for the analysis of monetary policy.
These models, based on the conventional view that inflation
stabilization should be a concern of monetary policy only, have typically neglected
the role of fiscal policy. We start with the evidence that a baseline
VAR-augmented Taylor rule can deliver recurrent mispredictions of
inflation in the U.S. before 1987. We then show that a fiscal feed-back rule, in
which the primary deficit reacts to both the output gap and the
government debt, can well characterize the behavior of fiscal policy throughout the
sample. However, by employing Markov-switching methods, we find
evidence of substantial instability across fiscal regimes. Yet this happens precisely
before 1987. We then augment the monetary VAR with a
fiscal policy rule and control for the endogenous regime switches for both
rules. We find that only over time windows belonging to the pre-1987 period
does the model based on the two rules predict the behavior of inflation
better than the one based just on the monetary policy rule. After
1987, when fiscal policy is estimated to switch to a regime of fiscal discipline,
the monetary-fiscal mix can be appropriately described as a regime of
monetary dominance. Over this period a monetary policy rule based
model is always a better predictor of the inflation behavior than the one
comprising both a monetary and a fiscal rule.
of the optimal monetary policy design problem as well as of simple feedback
rules. The international relative price channel is emphasized as the one peculiar
to the open economy dimension of monetary policy. Hence flexibility in
the nominal exchange rate enhances this channel. We first show that a feature
of the optimal policy under commitment, unlike the one under discretion,
is to entail stationary nominal exchange rate and price level. We show that
this property characterizes also a regime of fixed exchange rates. Hence, in
evaluating the desirability of such a regime, this benefit needs to be weighed
against the cost of excess smoothness in the terms of trade. We show that
there exist combinations of the parameter values that make a regime of fixed
exchange rates more desirable than the discretionary optimal policy. When the
economy is sufficiently open, this happens for a high relative weight assigned to
output gap variability in the Central Bank's loss function and for high values of
the elasticity of substitution between domestic and foreign goods. From this we
draw interesting conclusions for a modern version of the optimal currency area
literature.
welfare-state spending - display systematic patterns in the vicinity of
elections? And do such electoral cycles differ among political systems?
We investigate these questions in a data set encompassing sixty democracies
from 1960-98. Without conditioning on the political system, we find
that taxes are cut before elections, painful fiscal adjustments are postponed
until after the elections, while welfare-state spending displays no
electoral cycle. Our subsequent results show that pre-election tax cuts
are a universal phenomenon. The post-election fiscal adjustments (spending
cuts, tax hikes and rises in surplus) are, however, only present in
presidential democracies. Moreover, majoritarian electoral rules alone are
associated with pre-electoral spending cuts, while proportional electoral
rules are associated with expansions of welfare spending both before and
after elections.
on the distribution of production factors in the world and parameter values, allows for
worldwide factor price equalization or complete specialization. We explore the dynamics
of the model under different parameter values, and relate our theoretical results to the
empirical literature that studies the determinants of countries' income per capita growth
and levels. In general, the model is capable of generating predictions in accordance with
the most important findings in the empirical growth literature. At the same time, it
avoids some of the most serious problems of the (autarkic) neoclassical growth model.
We propose a novel methodology to deal with model uncertainty based on thick modelling, i.e. on considering a multiplicity of predictive models rather than a single predictive model. We show that portfolio allocations based on a thick modelling strategy systematically outperform those based on thin modelling.
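A minimal sketch of the thick-modelling idea, assuming mean-variance allocations: instead of relying on the single best predictive model, average the portfolio weights implied by each model's expected-return forecast; the weight rule and the risk-aversion value are illustrative, not the paper's specification.

```python
import numpy as np

def mv_weights(mu, Sigma, risk_aversion=5.0):
    """Mean-variance portfolio weights for one predictive model's expected returns."""
    return np.linalg.solve(Sigma, mu) / risk_aversion

def thick_allocation(expected_returns_by_model, Sigma, risk_aversion=5.0):
    """Thick-modelling allocation: average the weights implied by every model in the set."""
    return np.mean([mv_weights(mu, Sigma, risk_aversion)
                    for mu in expected_returns_by_model], axis=0)
```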