Should the 2024 Election Change Your Portfolio?

Though locked in a struggle over whose method rules the social sciences, economists and political scientists at least agreed on a division of labor: the former focus on production and profit; the latter study politics and track elections. No longer. Increasingly, economists speak of the need to create a so-called “election portfolio” that tracks a candidate’s winning chances, investing in industries the candidate is likely to support and going short on those that might lose out. Political polarization, the theory goes, means a diversified portfolio is no longer enough to hedge against an aggressive president who subsidizes specific sectors to the detriment of others.

At its root, the argument concerns how efficiently markets incorporate the presidency as a variable influencing future cash flows. In an article The Economist ran this week arguing for the relevance of the election portfolio, it noted that when the Democrats won Georgia’s runoffs in 2021, handing them control of the Senate, Treasury yields rose by 0.1 percentage points. Given the Biden administration’s spending largesse that followed, the yield move seems modest in hindsight. It would appear that markets have not fully priced in the consequences of a president’s public policies for affected industries. Holding a portfolio that tracks Biden’s election chances could therefore open arbitrage opportunities: if markets underprice how much renewables will benefit from Biden’s patronage and underestimate how much coal will lose out, going long on renewables and shorting coal could mean substantial returns.
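To see the mechanics, consider a minimal sketch of such a long-renewables, short-coal portfolio. Everything in it is hypothetical—the tickers, the winning probability, and the outcome returns are invented stand-ins, not market data or The Economist’s figures; the point is only how an election probability translates into an expected long-short return.

```python
# Stylized "election portfolio": go long industries a candidate would favor
# and short those that would lose out. All numbers below are hypothetical
# illustrations -- invented tickers, probabilities, and returns, reflecting
# neither The Economist's analysis nor real market data.

p_win = 0.55                                # assumed chance the candidate wins
long_basket = {"SOLAR": 0.6, "WIND": 0.4}   # hypothetical renewables, weights sum to 1
short_basket = {"COAL": 1.0}                # hypothetical coal ticker

# Assumed one-year returns for each asset under each election outcome
returns_if_win = {"SOLAR": 0.20, "WIND": 0.15, "COAL": -0.25}
returns_if_lose = {"SOLAR": -0.05, "WIND": -0.02, "COAL": 0.10}

def portfolio_return(asset_returns):
    """Return of the dollar-neutral long-short book for one outcome."""
    long_leg = sum(w * asset_returns[t] for t, w in long_basket.items())
    short_leg = sum(w * asset_returns[t] for t, w in short_basket.items())
    return long_leg - short_leg  # the short leg profits when its assets fall

expected = (p_win * portfolio_return(returns_if_win)
            + (1 - p_win) * portfolio_return(returns_if_lose))
print(f"Expected election-portfolio return: {expected:.1%}")  # ~17% with these inputs
```

The same arithmetic, run with honest probabilities and already-priced-in returns, is precisely what would erase the arbitrage the strategy promises.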

However, the election portfolio’s relevance relies on Trump and Biden having radically different economic priorities. It depends, for instance, on renewables doing abnormally well under a Biden presidency and coal reversing its decline with Trump in office. Yet JPMorgan Chase’s analysis of previous elections found that the S&P 500 grows by roughly the same amount whether a Republican or a Democrat is in office. And while political polarization has undoubtedly increased, parties do not present a unified front on economic issues: polarization has also amplified intra-party conflicts. President Biden’s Build Back Better Plan was radically cut down into the Inflation Reduction Act, minimizing the potential impact his environmental subsidies could have had.

Besides, political rhetoric only weakly tracks the government’s investment strategy. Despite his push for more green subsidies, President Biden approved offshore oil and gas drilling permits at a higher monthly rate than Trump did in his first three years in office, according to a 2021 report by Public Citizen, a left-leaning non-profit. Similarly, while Trump lamented how wind turbines allegedly kill whales, Republican-governed states lead Democratic-governed ones in wind turbine development. Ultimately, national economic trends matter more than who happens to sit in office. On balance, both the Biden and Trump administrations favor protectionism, industrial policy promoting American manufacturing against China, and increased military spending (although Trump would export less of it to European allies). Moreover, since the executive has limited legislative power, a divided Congress removes much of its economic influence. For instance, which party wins Congress, not who is president, will determine whether Trump’s tax cuts persist when they expire in 2025.

Finally, markets are forward-looking, whereas GDP, which government investment affects, is backward-looking. In an analysis by Dimensional Fund Advisors, an investment firm, plotting GDP growth against equity premiums in the same year yields only a weak correlation. But plotting a given year’s GDP against the previous year’s equity premiums reveals a clear positive relationship: because markets look to the future, the stock market leads GDP by at least a year. In other words, should a Biden or Trump presidency presage growth in specific industries over others, markets have probably already factored those trends into current prices.
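The lead-lag claim is easy to illustrate on synthetic data. The sketch below does not reproduce Dimensional’s analysis—the series are randomly generated under the assumption that this year’s equity premium prices in next year’s growth—but it shows why the contemporaneous correlation comes out weak while the lagged one comes out strong.

```python
import numpy as np

# Synthetic illustration of the lead-lag point (not Dimensional's data):
# if markets anticipate growth, the equity premium should correlate more
# with NEXT year's GDP growth than with the same year's.
rng = np.random.default_rng(0)
n = 60                                    # years of made-up history
gdp_growth = rng.normal(0.02, 0.02, n)    # hypothetical annual GDP growth

# Assume the premium in year t "prices in" growth in year t+1, plus noise
premium = np.empty(n)
premium[:-1] = 3.0 * gdp_growth[1:] + rng.normal(0.0, 0.05, n - 1)
premium[-1] = rng.normal(0.0, 0.05)       # final year has no observed future

same_year = np.corrcoef(gdp_growth, premium)[0, 1]
prior_year = np.corrcoef(gdp_growth[1:], premium[:-1])[0, 1]
print(f"corr(GDP growth, same-year premium):  {same_year:+.2f}")   # near zero
print(f"corr(GDP growth, prior-year premium): {prior_year:+.2f}")  # clearly positive
```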

The growing concern with hedging a portfolio against the presidency reflects worries about American democracy’s decline in circles on both the left and the right. In a liberal democracy, a robust rule of law implies that the office of the presidency should only marginally impact an investor’s portfolio. Indeed, as The Economist noted, it is usually in emerging economies that politics sways the market. Hence, the call to hedge your portfolio highlights the concern of rot in America’s liberal democracy. Two prominent democracy indexes, Freedom House’s “Freedom in the World” and The Economist Intelligence Unit’s (EIU) Democracy Index, have repeatedly marked down the score of U.S. democracy. In Freedom House’s reporting, the U.S. scores roughly 10 points lower (on a scale of 100) than other Western democracies, like Germany or Canada. At the same time, the EIU classifies America as a “flawed democracy,” in the same category as India, while most other Western democracies count as “full democracies.”

News headlines have followed the pessimistic trend, including a report last year by the Brookings Institution, a liberal think-tank, on “Understanding Democratic Decline in the United States.” But dig deeper into the rankings, and the pessimism seems unwarranted. Neither Freedom House nor the EIU claims that the rule of law in America has weakened or that civil liberties have eroded. Instead, the U.S. score has dropped on alleged evidence of increasing racial income inequality (for Freedom House) and an uptick in political polarization (for the EIU). The merits of these concerns aside, neither deals with the democratic process itself, even when the definition of “liberal democracy” is stretched to its limit.

The perverse effects of political polarization and rising income inequality are easy to grasp. Few in America today claim they should not be investigated thoroughly and addressed by government policy. But the belief that the next presidency will upend markets so drastically that investors should radically reshape their portfolios rests on a view of politics driving economics that better fits a country under the thumb of strongmen or generals than the United States, with its strong rule of law. Let the political theorists worry about politics, and stick to orthodox portfolio theory.

The Dawn of Freedom: How the US Should Support Iran’s Demonstrators

Mahsa Amini, a young Iranian-Kurdish woman, has become a martyr for Iranians seeking freedom. Arrested by Iran’s morality police—the Guidance Patrol—for not wearing her hijab in accordance with the Islamic Republic’s fundamentalist reading of Islamic law, she was beaten to death in their custody. Protests ensued, with Iranian women leading the charge; as the authorities rain down fire on them, they defiantly chant “zan, zendegi, azadi”: women, life, freedom.

Recent years have demonstrated that unrest and the Islamic Republic are intertwined. But today’s protests—three weeks strong—are markedly different from anything that came before. Iran’s most widespread uprisings of the last decade, between 2017 and 2019, were caused by inflation and food shortages. This time around, however, the protests did not morph from economic to political; they were politically charged from the start. The mandatory hijab is a pillar of the Islamic Republic’s reign of terror, instituted shortly after the 1979 revolution: to challenge forced veiling is to challenge the bedrock of the regime’s power.

In the face of such unrest—with protests sprawling across over 80 cities—Iran’s largest reformist party, the Union of Islamic Iran People’s Party, recently demanded a repeal of obligatory hijab laws and respect for peaceful demonstrations. Though remote from the corridors of power, the party is still legal—run by former aides of ex-president Khatami, and thus operating within Iran’s Islamist political environment. Yet in contrast to previous waves of demonstrations, like the 2009 Green Movement protests over election rigging, any hope of the Islamic Republic honoring the democratic promises of its constitution, including free elections and basic civil liberties, is long gone. That a legal party in Iran wishes to modify a foundational doctrine of the Islamic Republic’s ideology just goes to show how little legitimacy the regime possesses in the eyes of the Iranian people. Today’s protests seek regime change, far beyond reform—hence chants like “death to the dictator,” referencing Iran’s Supreme Leader Ali Khamenei.

Unsurprisingly for a government founded on thuggery, the regime’s crackdown on the protests has been unsparingly cruel. Iran Human Rights, an Oslo-based civil rights group, estimates authorities have murdered more than 154 protestors and detained scores of others. Police shoot freely at protestors, who, though mostly unarmed, fight back courageously. Sources indicate that as high-schoolers joined the demonstrations, they too were killed by the regime’s security forces; authorities buried one young victim, Nika Shakarami, in secret while threatening her family into silence. Iranian president Ebrahim Raisi has sworn to “deal decisively” with the discontented masses—signaling yet more repression to come—but the government cannot easily extinguish their rage. Although the Islamic Republic may survive this round of bloodshed—the massacre of innocent citizens is rooted in the regime’s history—its loyalist core shrinks by the day. With over 60% of the population under the age of 30, according to some estimates, most Iranians see the state’s Islamist ideology as antithetical to their values and priorities. By severing the social contract with the citizenry, Khamenei, Raisi, and the rest of the nation’s clerical leadership have committed themselves to maintaining order through sheer terror.

Yet terror is no panacea for the regime’s woes. As ideological purists become rarer by the day, the clerics will have to empty their pockets ever further to maintain the state’s repressive apparatus. The revolutionary class—electrified by Khomeini’s radical Islamist strand of Shi’a Islam—will continue to die off, yielding to an elite of selfish condottieri: thugs who will happily brutalize their fellow citizens for power and coin, but who are devoid of any meaningful ideological commitment to the Islamic Republic as a political project.

The West can play a major role in undermining the Islamic Republic’s foundations—to the point of triggering regime change—by preventing the clerical elite from financing its repressive agents. While sanctions have choked off major sectors of the Iranian economy, regime loyalists like the Islamic Revolutionary Guard Corps (IRGC) still run sophisticated operations in secret. The drug, weapons, and petroleum trades are particular favorites, yet the Guards run everything from construction to telecommunications as well. Cracking down on these sources of revenue—seizing trade in goods that violates international law, sanctioning the Guards’ contacts in the West—would foster competition for resources between the ideological clerics and their henchmen, who will demand ever-higher recompense for their savage services.

Just as many a Roman emperor was murdered by his imperial guard, so can the IRGC bring down any cleric in power—even Khamenei himself. As the clerics and their ideological soft power die off, only the praetorians will remain. And if the struggle for financial resources between them grows fierce enough, they will eventually fracture in the face of a citizen uprising.

Finally, when considering how the West should react, it is helpful to recall the Obama administration’s catastrophic response to the 2009 Green Movement. Instead of capitalizing on a wave of democratic discontent and accepting the movement leaders’ plea for American aid, the Obama administration prioritized peace with the Islamists. Credible reporting by Wall Street Journal reporter Jay Solomon subsequently revealed that the president had withheld aid from the protests’ leaders, even though helping democratic movements rise up against dictators is standard US policy. The Biden administration should not make the same mistake today: the thugs in charge of Iran do not represent the will of the Iranian people and cannot be trusted to keep any agreement akin to the now-dead nuclear deal. To seek concord with the ayatollahs would demonstrate US weakness and crush Iranian hopes for American support. Contrary to Obama’s failed policy of negotiating with the clerics, Biden and his advisors must make the utmost effort to contact today’s protest leaders, cut off as they are from the Western world, with the aim of facilitating eventual regime change.

The clerics in Iran took years to overthrow the Shah. Now, the countless waves of Iranian protests are starting to bear fruit: for the first time since 1979, the clerics tremble before the fervor of the Iranian people claiming their fundamental rights. If not today, then tomorrow, or next week, or next year—the Islamic Republic will fall. The mullahs must now choose: willingly surrender power to the people, in hopes of a secure exit from politics, or be dragged out onto the streets by the fury of the masses they have so long oppressed.

War and Peace: An End to Forever War in Ukraine?

Vladimir Putin has bitten off more than he can chew in invading Ukraine. To be sure, the course of the war has surprised many: the author of this article, along with most Western analysts, worried Russian troops would march into Kyiv with ease, reenacting the lightning seizure of Crimea in 2014. 

Instead, the Russians have faced a powerful Western democratic coalition ready to supply Kyiv with arms, and a determined fighting force in the Ukrainian population, military and civilian alike. While Putin’s initial strategy of rapidly taking Kyiv had faltered by April, his troops made slow yet marked gains in Ukraine’s eastern and southern provinces throughout the summer. When Volodymyr Zelenskyy, Ukraine’s president, announced plans in August for a counter-offensive to take back the key southern city of Kherson, few were optimistic about its chances. Most Western media outlets urged caution; despite sympathy for Ukraine’s struggle against Russian aggression, many feared that switching to the offensive would only hand Russia more momentum.

Ukraine has surprised the world once more. In early September, Ukraine’s generals brilliantly outwitted the Kremlin by fixing its attention on Kherson while secretly coordinating a full-scale assault on Russian positions in the country’s northeast. The ruse worked spectacularly: on September 11th, Valery Zaluzhny, Ukraine’s top general, announced that the country’s fighters had liberated over 3,000 square kilometers of land the Russians had spent months capturing. With catastrophe looming, Putin faces a difficult choice: continue the war at the risk of churning through even more of his troops, or bear the humiliation of negotiating their retreat. Mr. Zelenskyy must make an equally thorny decision: should he capitalize on his forces’ momentum to hammer the Russians back across the border—10,000 of them caught between his determined soldiers and the veritable anvil of the Oskil river—or exploit the chaos in the Kremlin to negotiate a profitable peace?

Ultimately, Western support for Ukraine’s efforts will determine both men’s choices. Despite sky-high energy prices—driven higher by Putin’s decision to close his gas pipelines to Europe, sending markets into turmoil—politicians across the aisle, from America to Germany, maintain their commitment to a democratic Ukraine. The world has come a long way from February, when a Russian takeover of Europe’s second-largest country seemed imminent, to the Kremlin’s humiliation today, in early September; yet regardless of Putin’s military failures, his country remains a nuclear superpower in need of deterrence. Thus, the West must take decisive action against Russia for Ukraine to continue winning this war. While an earlier strategy could have entailed negotiating with Putin—an approach this author favored at the war’s onset in February, and one advocated recently by many in the foreign affairs community—it is the wrong approach for the current state of the conflict.

Putin has committed too many atrocities—from the shelling of schools to the destruction of entire cities, forced disappearances, and civilian massacres—to justify anything other than his complete defeat. He has corrupted Russia from a fledgling democracy at the beginning of the millennium into a quasi-fascist state today—quite the irony, as he claims to be fighting fascists in Ukraine. Viewing Russia as a dictatorship that democracies can still work with—a viewpoint exemplified by Angela Merkel, Germany’s former chancellor, whose trade deals with Russia lie at the heart of Europe’s slavish dependence on Russian gas—is no longer sustainable. Negotiation would only signal a return to former times, to business as usual; worst of all, Putin could spin it as a victory for himself and his brand of Russian fascism, a dangerous prospect for anyone wishing for a democratic Russia.

With Russia’s advance shattered thanks to Ukrainian ingenuity, the West should double down on its arms support for the country. American anti-air weaponry has played a critical role in stopping Russia’s air force from controlling Ukrainian skies; NATO members from the UK to Poland have also provided tanks, ammunition, guns, and drones to counter the Kremlin’s firepower. Beyond providing arms, the West must also ramp up its training of Ukrainian troops, particularly in offensive operations. More than six months of fighting have bled Ukraine’s best battalions, and newer recruits’ training fails to match their bravery and patriotism. Moreover, years of war on the defensive—beginning with the Crimean conflict in 2014—gave Ukraine plenty of expertise in defense but little insight into conducting offensive maneuvers.

Finally, Western officials must continuously reassert their commitment to Ukrainian freedom. This sounds obvious, but it plays a crucial part in shaping Zelenskyy’s mindset. The biggest threat to Ukraine’s success is overreach: spreading its troops too thin by trying to pierce the Kremlin’s defenses around Kherson while simultaneously pushing against Putin in the north. Indeed, the obstacles Russia faced in invading Ukraine—the mud, the rivers, the dense urban areas—will turn into assets as it defends its conquered territories; the pendulum of war could still swing toward the Russians. Amid battlefield triumphs, it is all too easy for the Ukrainian high command to push its luck too far: to overreach in order to show allies that their weapons are working miracles in the field, and so incentivize them to send more. Under such foreign pressure, a careful offensive could give way to an unwise yet politically enticing expedition, with disastrous effects. Thus, the West should make clear that its support is categorical, not contingent; it stands with Ukraine no matter where Zelenskyy’s troops go.

Most importantly, it is time for the West to frame its demands against Putin—or rather, to let Ukraine do so. By now, Putin has lost the advantages he once held. A quick Ukrainian defeat would have sealed NATO’s demise (many forget the alliance was proclaimed “brain-dead” by French president Emmanuel Macron in 2019), made Ukraine a pro-Russia buffer against the West, and justified his oppression at home as a precious antidote to Western feebleness. Instead, he has revealed the decrepit state of his military, enraged his most prized constituency, Russian nationalists, and unwittingly expanded NATO with the addition of Sweden and Finland, two countries that previously considered NATO membership anathema to their interests. More than ever, Ukraine has the momentum to decisively drive out Russian forces: by the time Putin can recoup his losses for another invasion attempt, the country will have joined the NATO alliance, foreclosing such a risk. If Ukrainians remain steadfast in refusing any negotiation with Putin that implies less than a total Russian withdrawal, the West must honor their wishes.

Perhaps deft diplomacy could have avoided this war in the first place, but the hour of negotiation has long passed. Today, the West must support Zelenskyy more than ever as he frees his country—warning every tyrant with expansionist ambitions that democracies, when threatened, stand up for each other.

Tolerance as a Virtue

“The Inquisition is well known to be an admirable and truly Christian invention for increasing the power of the pope and monks, and rendering the population of a whole kingdom hypocrites.” So wrote Voltaire, an 18th-century philosophe, in defiance of the Catholic Church and its supposed monopoly over truth in much of Western Europe. Though the winds of intolerance had abated in his time compared to the horrors of the 16th and 17th centuries, religious fanaticism still loomed large over Europe. In 1766, just two years after Voltaire penned his famous Dictionnaire philosophique, a young nobleman, François-Jean de la Barre, was tortured, beheaded, and his body burned in Northern France. His crime: refusing to salute a Roman Catholic procession. That an action considered so benign today entailed so severe a punishment a mere two and a half centuries ago is a testament to the progress of freedom of expression and the march of liberal democracy in the 19th, 20th, and 21st centuries.

The democratic world today is largely free from punishments for religious criticism. Blasphemy legislation is scarcely invoked in Europe, and many countries, including the Netherlands and Denmark, have struck such penalties from their statute books in the last decade. Blasphemers in England, France, Germany, and Spain are no longer mutilated or burned at the stake as in previous centuries; the Inquisition claimed its last victim in 1826. And to Americans, citizens of a country so proud of defending religious freedom that it appears in the First Amendment of their constitution, the thought of the state picking winners and losers on religious grounds is about as absurd as it gets. Freedom of religion is so ingrained in the American consciousness that many do not know that the Massachusetts Bay Colony, where Harvard University was founded, made blasphemy—“a cursing of God by atheism, or the like”—punishable by death on biblical grounds in the 17th century.

Such progressive developments are laudable and largely unsurprising, given the rising secularization of Western countries over the last century. But blasphemy—and the charges of heresy that go along with it—survives, albeit somewhat disguised. With the loss of religious faith, politics takes on a spiritual as well as a temporal meaning. This is especially true in the United States. Witness the rise of the QAnon conspiracy, which bears all the hallmarks of messianism—a charismatic prophet in Donald Trump, an apocalypse in the Democrats, and deliverance through the restoration of Trump as the legitimate president following President Biden’s usurpation. Though Democrats brush off cultish behavior as a right-wing phenomenon, they too are in the grip of a pious reaction. It is best seen in Critical Race Theory, an originally academic doctrine developed in elite law schools to analyze the persistence of racism despite civil rights legislation, but transformed by a new generation of activists into an essentially religious account of whiteness as sin, racism as the fall, and salvation achieved through fighting racial oppression.

Polarization turbocharged by religiosity—which shifts political debates from the rational sphere to the sacrosanct, that is, to the intellectually untouchable—has led to a devaluation of tolerance as a virtue. When Voltaire wrote in the 18th century, tolerance was only beginning to represent a praiseworthy inclination. Tolerance, or toleration in the political sense, was mostly a sign of weakness—a concession that implied the state’s inability to exercise its sovereignty, to maintain unity. Faith was a gift of God to Man, but Man lived in a community; as such, he was answerable not only to divine law but also to civil law, formulated by the state. Only by the late 18th century did the word acquire a positive connotation—a connotation at risk in today’s hyper-polarized political environment, and further jeopardized by anyone’s ability to hurl anonymous insults on social media.

There was a time in America when conservative Democrats and liberal Republicans existed; there was also a time when sympathizing with the “other side” was socially acceptable, even commendable. That era has passed. The consolidation of beliefs on issues as diverse as gun control, abortion, police violence, and today—lamentably—even vaccines has led to the creation of two Churches, replete with their own canons and cliques of theologians and lawyers to interpret them. An economy of hate fueled by contrasting dogmas puts Americans on the path to religious civil war, with each side brandishing its Scripture as proof of absolute Truth. The word heresy traces its origin to the ancient Greek “hairesis,” signifying choice; the American citizen today sees that freedom of choice stripped away, replaced by a rigid, quasi-theistic worldview independent of reason and natural right, but firmly rooted in the canons of the two Churches.

For QAnon enthusiasts, rejecting left-wing politics does not suffice: the creed demands action. At the rally before the January 6th insurrection, their prophet demanded they “fight much harder” and “show strength,” and the faithful obliged by looting, punching, and brutalizing their way through Congress. Similarly, today’s liberal clergy expects the individual not merely to renounce racism, but to actively eradicate the scourge through activism; salvation is a communal objective. In the words of Ibram X. Kendi, one of the movement’s popular theologians, one must be an “antiracist” or an “ally” to the activist cause. Activism entails shutting down controversial speakers on college campuses, replacing the hateful notion of “academic freedom” with the appropriate “academic justice,” codifying lists of “oppressive language”—including the term “trigger warning”—and denouncing any disagreement as heresy society must expunge.

Repeatedly, the two faiths clash in awesome battles—troubling to spectators but exhilarating to the holy warriors—and the classroom often serves as the battlefield. In the past year, for instance, Republican legislators from Arizona to Tennessee have sought to curtail the teaching of Critical Race Theory in schools by banning lessons that may elicit “discomfort” or “guilt.” Under the guise of protecting students from the left-wing Church’s dogmas, the Republican Church has effectively labeled as heresy the teaching of any unflattering episode in American history. It does not require much reflection to see that educating students about the horrors of slavery and the injustices of Jim Crow falls under so loose a standard as causing “discomfort” and “guilt.”

Shockingly for a country founded on principles of free expression and religious tolerance, teachers in Republican states are fired for speaking about white privilege, while universities eager to please left-wing students sack professors conducting controversial—read: thought-provoking—research. The codification of proper doctrine in schools, with punishment aimed at those who stray from orthodoxy, corrupts schools from open academies into purist seminaries. Just as every tyrant knows to silence the learned among his subjects, so is American society’s weakening commitment to free expression felt most in its schools. No wonder that, according to a new New York Times/Siena College poll, over half of Americans hold their tongues rather than voice their beliefs for fear of retaliation; if schools are no longer bastions of free discourse, what places are?

The rotting of liberal views on free speech and tolerance at home poses a grave threat to America’s defense of freedom of speech as a universal human right. Two weeks ago, an outraged fanatic knifed Salman Rushdie, author of The Satanic Verses, a book some Muslims deem blasphemous, at a lecture in western New York. That such an egregious attack could occur on US soil should awaken those apologists in the West who seek to shroud it in the language of inclusivity. These cowards—what The Atlantic staff writer Graeme Wood calls “Team to be Sure”—condemn violence, but hasten to add that Rushdie’s work is a provocation. Jimmy Carter articulated such views over thirty years ago—when political correctness first came into vogue—in an op-ed in the New York Times: when the Islamic Republic of Iran’s Supreme Leader Ayatollah Khomeini issued a fatwa, or religious opinion, in 1989 demanding Muslims murder Rushdie for his work, Carter stated that the writer’s “First Amendment freedoms were important” but that nevertheless “we have tended to promote him and his book with little acknowledgment that it is a direct insult to those millions of Moslems whose sacred beliefs have been violated.”

Equating the hurt feelings of fanatics with individuals’ right to free speech was unthinkable in an age when liberal defenses of free speech meant something. But in a time when the free world’s best universities are filled with students shutting down challenging speakers and censoring daring research, when legislators seek to ban the teaching of history because pupils might find it “uncomfortable,” and when violence is seen as a legitimate outlet for resolving disputes—a belief firmly rooted in the thought of the contemporary right and even sizable parts of the far left—perhaps it comes as no surprise. Blasphemy may not be a crime in 21st-century America, but we entertain an intolerant mindset that lets off the hook those who murder and maim on the sole supposition that they possess a higher truth.

No Quiet on the Eastern Front: Why There is War in Ukraine, and How the West Should Respond

Doing what many considered unfathomable, Vladimir Putin, Russia’s despot, has declared war on Ukraine. In a massive invasion combining air, land, and sea forces, Russian troops poured in on Thursday, February 24, 2022, from Belarus to the north and the separatist-held Donbas region in the east, all while Ukrainian military bases and cities faced heavy shelling. As of this writing, fighting rages over Ukraine’s capital, Kyiv. In this critical moment for European peace and stability, an examination of post-Cold War history illustrates Putin’s ambitions in Ukraine and the path the West must take in response.

To many in the West, Vladimir Putin is the archetype of a crazed tyrant. He brutalizes those who protest his iron rule, makes journalists disappear, poisons former spies, and jails opposition leaders after failing to kill them. Painting him additionally as a mad conqueror hell-bent on recreating the Soviet Union, which he served as a KGB officer, hardly seems to disturb the portrait. Moreover, his recent messianism—proclaiming war to save Ukraine from “nazis” and to “reunite” the Ukrainian and Russian peoples, unilaterally erasing Ukraine’s right to self-determination—does not help observers view him in a rational light. Hence President Biden’s remark that Putin’s efforts to “recreate the Soviet Union” explain his recent actions. Yet viewing Putin’s ambitions in Ukraine through the lens of apparent insanity distracts from the real geopolitical concerns that motivated his invasion.

For Putin, the prospect of increasing Western influence in Ukraine is akin to an act of war, a view bolstered by what he sees as decades of Western insincerity. In the early 1990s, as the Soviet Union teetered on collapse, American and other Western officials promised their Soviet counterparts that if they dissolved the Warsaw Pact—the string of USSR satellite states spanning Eastern Europe, including East Germany but barring Yugoslavia—NATO would cease its expansion. Indeed, James Baker, the US secretary of state at the time, assured Soviet leader Mikhail Gorbachev that NATO would “not move one inch eastward.” Such pledges evaporated as NATO welcomed almost the entire Eastern Bloc into its ranks between 1997 and 2004. Paradoxically, this was the period when Russia was at its weakest militarily and economically—a point not lost on the 50 senior government officials and foreign policy experts who wrote an open letter to President Bill Clinton opposing NATO enlargement, warning that drawing “a new line of division in Europe between the ins and outs of a new NATO” would “foster instability.” With Russian troops moving ever closer to Kyiv, it is now easy to see why.

Against the backdrop of an expanding NATO and a growing European Union (EU), both bordering Russia by 2004, relations between the West and the former superpower took a decisive turn toward conflict after NATO’s Bucharest Summit in April 2008. While the admission of countries such as Poland and Hungary into NATO in 1999 ruffled Russia’s feathers, the arrival of the Baltic states in 2004 put the Kremlin on guard: the West now stood at the borders of a country decidedly weakened by the breakup of the USSR. Nevertheless, relations with the West stayed friendly at the beginning of Putin’s more than 20-year reign as Russian leader. The 2008 summit marked the turning point: NATO allies issued the Bucharest Declaration, stating that “NATO welcomes Ukraine’s and Georgia’s Euro-Atlantic aspirations for membership in NATO. We agreed today that these countries will become members of NATO.” Putin predictably responded that Georgian and Ukrainian membership posed a “direct threat” to Russia.

Mere months later, Russia used ethnic divides as a pretext to invade Georgia. Imagining he had Western support, Georgian President Mikheil Saakashvili—who had come to power in the Rose Revolution that ousted his Soviet-era predecessor—tried to assert his nation’s claims to South Ossetia and Abkhazia, regions dominated by non-Georgian ethnic minorities. Following the Bucharest Declaration, Russia chose to recognize the claims of South Ossetian and Abkhazian separatists and intervened militarily, trouncing Georgian forces and effectively annexing the two territories.

Tragedy repeated itself in Ukraine six years later. Facing economic collapse in 2013, Ukrainian President Viktor Yanukovych was negotiating a deal with the EU that would bring Ukraine closer to Western Europe. Under intense pressure from Russia, he refused the EU agreement and accepted $15bn worth of Russian loans, and his government violently suppressed the ensuing protests. Negotiations with the parliamentary opposition, with the involvement of EU officials, led Yanukovych to agree to stage new elections; yet protestors, some armed, rejected the deal, prompting Yanukovych to flee the country for Russia. Witnessing the collapse of the Kremlin’s authority in Kyiv, Putin deployed troops stationed in Sevastopol, a base Ukraine leases to Russia, to take over Crimea, and, seizing on pro-Russian feeling in the east, he backed the establishment of separatist states in the Donbas region, namely in Donetsk and Luhansk. These statelets have sheltered Russian troops and provided one of the staging grounds for the current invasion of Ukraine.

That NATO and EU expansion into Russian spheres of influence would prompt a military response proved shocking to Western powers on the eve of the 2008 Russian-Georgian War, at Putin’s annexation of Crimea in 2014, and today, as Russian tanks are poised to move into Kyiv. By now, however, it should be abundantly clear that the current turmoil in Eastern Europe owes much to poor Western messaging. With NATO enlargement in the 1990s and early 2000s, and with the Bucharest Declaration specifically, the West seemed to signal firm support for former Soviet satellites such as Ukraine and Georgia in freeing themselves from Russian influence. And so they did: the Georgians asserted their claims to Russian-backed separatist regions, while the Ukrainians overthrew a pro-Russian government not once but twice, in the Orange Revolution of 2004 and the Revolution of Dignity in 2014. The Russians, feeling their grip on their former satellites slipping, bared their teeth—only for NATO and the West to retreat to the sidelines as Russia swept into now-independent territories to reassert its influence.

Once more, Western leaders—from President Biden of the United States to NATO chief Jens Stoltenberg to the EU’s Ursula von der Leyen—seem taken aback by Putin’s actions. Von der Leyen, head of the European Commission, decried Putin as “bringing war back to Europe.” Stoltenberg voiced similar sentiments, announcing that “peace on our continent has been shattered.” President Biden spoke of the conflict as a personal vendetta, the fantasy of a detached tyrant: “Putin chose this war.” But Putin’s actions also reflect calculated geopolitical ambitions, not solely the whims of a tyrant. He knows the West cannot afford to intervene militarily in Ukraine, yet he faces the prospect of complete encirclement by NATO should he relinquish Russia’s claims to its sphere of influence in Eastern Europe and the Caucasus. Thus, he destabilized Georgia to shut down its prospects of joining NATO in 2008, and he is doing the same in Ukraine in response to its growing attachment to Europe.

The great irony in the West’s dealings with Russia is that Western leaders believe their own actions do not constitute balance-of-power politics, while those of their rivals, namely Russia, fall squarely in line with supposedly bygone ideas about spheres of influence, security concerns, and strategic interest. The root of the disconnect lies in what the University of Chicago’s John Mearsheimer, an expert in international relations, identified in a 2014 talk on the Ukrainian crisis as a “21st-century attitude” versus a “19th-century attitude.” When NATO and the EU seek enlargement, Western officials see not security interests or an expanding sphere of influence, but benign initiatives of democratic cooperation and deeper commercial engagement. When Russia intervenes militarily, however, they believe it reveals the mindset of the Great Game—of a time when anarchy ruled international relations and might made right.

Thus, Western officials believe they are participating in a new international relations landscape organized around benign initiatives everyone can get behind—promoting democracy and free-market capitalism. But these measures are hardly innocent to their Russian and Chinese counterparts, who sense an effort to secure Western hegemony consistent with their own understanding of how international relations unfold. Western leaders have failed to realize, time and time again, that the Russians do not speak their language: in “democratic cooperation” and “economic partnership,” they hear the West’s attempt to expand its sphere of influence to the detriment of Russia. To the Russians, it is as if China promised greater trade agreements with Mexico and Canada, along with the prospect of military partnership. Would the Americans not respond in outrage?

Of course, Putin’s Russia deserves its fair share of blame for blatantly violating international law by invading a sovereign state. Nonetheless, based on post-Cold War history and a realistic assessment of international relations, it is equally clear that Western expansion into Russia’s sphere of influence played a role in fomenting the present crisis. Indeed, the roots of the current conflict lie in the mistaken belief that a defeated Soviet Union meant Russia would renounce its historical regional control in a 21st-century world where “spheres of influence” no longer mattered. By expanding NATO, Western powers instead convinced Russia that its sphere of influence remained of utmost importance, which explains Moscow’s aggressive behavior today. As Mearsheimer makes clear in the same lecture, the West’s military umbrella of NATO and its economic heft allow it to make strategic blunders without too many repercussions. Until China becomes a peer competitor, the primary victims of diplomatic misunderstandings are the very states the West seeks to protect—including, above all, Ukraine. With NATO involvement likely to amplify tensions further, how should the West deal with Putin’s current actions?

At present, neither side has a clear advantage. Although EU, US, and British officials announce heavier sanctions by the day, “Fortress Russia” is less dependent on the West than ever before. The country’s central bank has steadily reduced the share of its reserves held in dollars since 2014, Putin signed a new trade agreement with China on the day of the invasion, and Russia controls some 40% of Europe’s gas supplies. However, the latest wave of Western sanctions threatens Russia’s financial system more than ever by freezing the central bank’s foreign reserves. Now itself under sanctions, the central bank can no longer use euros and dollars to provide liquidity to sanctioned domestic banks, nor can it intervene in the currency market to prop up the ruble. The consequences include bank runs and further isolation from the world economy, putting immense economic pressure on ordinary Russians. The Kremlin, for its part, met the new sanctions by placing its nuclear arsenal on “high alert,” a sign that previously uncontemplated measures may now appear viable. As a reprisal, for instance, Putin could cut off Western Europe from its gas—a move he had once dismissed—choking a continent already battered by high energy prices.

Thus, the Ukrainian conflict is a war of attrition, with the West and Russia pummeling each other indirectly to see who backs down first. The focus of the vying opponents will be the capital, Kyiv. A Russian occupation of Ukraine looks unlikely: 200,000 troops are not nearly enough for such a goal, and, contrary to what some Western leaders believe, Putin seeks to solidify his sphere of influence—by toppling the current government and installing a puppet regime—not to recreate the Soviet Union, which would require financial and military resources far beyond his reach. Moreover, although Russia far outstrips Ukraine in military strength, its definition of victory is a stable, pro-Russia Ukraine, which would require absolute Ukrainian submission. The Ukrainians, by contrast, fight for national survival—they simply need to exhaust the Russians. The West should therefore direct its resources toward supporting Ukrainian guerrilla warfare and tightening sanctions—harassing Russian supply lines and raising the social and financial costs of fighting—all while providing economic and humanitarian aid so Ukraine can outlast Putin’s willpower. By doing so, the West can turn the asymmetry of the conflict into a source of strength.

Diplomatically, the West must set precise demands, avoiding the ambiguity that plagued the aftermath of the 2014 Ukraine conflict. This entails acquiescing to some of Putin’s stipulations; pragmatism is not weakness but strength if it means restoring Ukrainian sovereignty. First, the West should make absolutely clear to the Kremlin that Ukraine will never join NATO: that no Western leader has so much as contemplated sending troops into Kyiv to fight the Russians demonstrates that this is not an impossible pledge to fulfill. Second, it can agree to halt NATO expansion at its 2004 borders—rebuffing recent rumors of Finland and Sweden joining—meeting Putin halfway on his demand to return to pre-1997 frontiers. Third, it should compromise with the Kremlin on extensive autonomy for Ukraine’s Donbas region and on recognition of minority language rights, which Ukraine’s parliament rescinded in 2014.

Lastly, given that Russia will likely take Kyiv sooner rather than later—despite heroic Ukrainian resistance—the West should negotiate a diverse governing coalition combining Ukrainian politicians friendly to the West with eastern Ukrainian politicians closer to Russia. The aim is a neutral Ukraine, committed to both East and West, acting as a buffer state. Regular consultations between Russian, European, and American officials are needed to settle each side’s regional security concerns and avert future combat in the region. If Russia successfully installs a friendly leader in Kyiv after Western-supported protests ousted two previous pro-Russia presidents, the Kremlin could very well suppose itself capable of the same operation in Moldova. Unlike Ukraine, Moldova has no formal plans to join NATO; it lies firmly in Russia’s sphere of influence yet draws ever closer to the West, especially after current president Maia Sandu defeated her Moscow-backed predecessor, Igor Dodon. Moreover, like Ukraine, the country suffers from ethnic tensions dating to the USSR’s breakup, and there are mounting concerns over an attack from Transnistria—a breakaway region with strong cultural ties to Russia, including a Russian military presence. A mechanism for hashing out disagreements over Eastern European defense policy is thus of the utmost necessity in mitigating tensions, as outlined in a recent “consensus proposal” by the RAND Corporation, a foreign policy think tank.

Undoubtedly, Putin is in the wrong in this conflict. He has infringed on the sovereignty of his neighbor, subverted international norms, and posed an existential threat to Ukrainian democracy. Nevertheless, as this article has hopefully shown, geopolitics remains relevant in the 21st century. Balance of power, spheres of influence—these are realities the West must contend with in dealing with international competitors, whether Russia today or, soon enough, China on the question of Taiwan. As with Georgia, the West gave Ukraine a false impression of support, encouraging deeper ties to stymie Russian bullying. Yet the opposite proved true: Ukraine drew closer to the West only to face Russian wrath, with the burden falling most heavily on the Ukrainian people. Only by recognizing the limits of its influence in the East can the West free Ukraine from its current predicament, squeezed between the West and an ever-threatening Russian bear. And only by doing so will the Ukrainian people have the chance to decisively claim their right to self-government.

The Codified Constitution: A Trojan Horse for Despotism?

Consider these excerpts of a codified constitution in effect today. Chapter 2, titled “Rights and Freedoms of Man and Citizen,” contains the following statements. Article 17 proclaims that “Fundamental human rights and freedoms are inalienable and shall be enjoyed by everyone since the day of birth.” Article 19 declares that “All people shall be equal before the law and court.” Article 21, that “Human dignity shall be protected by the state. Nothing may serve as a basis for its derogation.” And finally, Article 29, “Everyone shall be guaranteed the freedom of ideas and speech.”

The enumerated statements recall the spirit of the United States Constitution, the French Declaration of the Rights of Man and Citizen, or the UN Universal Declaration of Human Rights. Moreover, they underpin values essential to any liberal democracy—the rule of law, free speech, and respect for individual freedoms; the constitution in question could easily be that of the United States, France, Germany, or other nations of the free world.

Surprisingly, the constitution in question belongs to Russia, a country not usually lauded by independent analysts for protecting civil liberties. The Economist Intelligence Unit (EIU), which compiles an annual ranking of countries by degree of liberal democracy, gave the nation a score of 3.31 out of a maximum of 10 in 2020, placing it in the category of “authoritarian regimes.”

Russia shows that a country may very well express liberal sentiments in its constitution without adhering to those principles in practice. A thorny question thus arises for proponents of individual liberty: to what extent does a codified, or written, constitution help or hinder individual liberty? While written constitutions provide some fundamental protections for personal freedom, their codified, centralized nature renders them susceptible to manipulation by autocrats. To support this claim, we will examine the topic in three parts. First, we shall define individual liberty and posit the rule of law as an interpretive framework for evaluating personal freedom, while distinguishing the rule of law from rule by law. Second, we will assess the individual freedoms outlined by codified constitutions while highlighting their shortcomings in practice. Finally, using the distinction between the rule of law and rule by law, we will demonstrate that although a codified constitution erects some bulwarks against abuses of power, its nature makes it compatible with rule by law but not necessarily with the rule of law.

Establishing a definition and framework for conceptualizing individual liberty is fundamental to evaluating the degree to which codified constitutions safeguard personal freedom. This essay, following Isaiah Berlin’s famous distinction in “Two Concepts of Liberty,” will take individual liberty as “negative freedom” (Berlin, 1969). That is, it shall set aside so-called “positive freedom,” defined as the power of self-realization or self-mastery, in favor of a “negative” conception of liberty: freedom from interference by society and government. The “negative” notion of liberty enshrines a personal sphere free from state or societal coercion (e.g., Constant, 1819). In this classically liberal or libertarian sense of the word, the greater the individual citizen’s domain of non-interference from the state, the greater their liberty (Berlin, 1969). This conception of liberty will form the basis for our appraisal of a codified constitution’s protection of individual liberty.

Even when narrowed to negative liberty, liberty is too broad and ambiguous a concept for analysis; it needs an interpretative framework. This essay will therefore rely on the idea of the rule of law as a framework for evaluating liberty. In short, we take the classical position, eloquently stated by the eighteenth-century French philosopher Voltaire, that “liberty consists of depending only upon the law” (Voltaire, as cited in Hayek, 1960). To be free, therefore, means the state may exercise its coercive apparatus against the individual only according to the pronouncements of recognized law (Hayek, 1960).

Furthermore, there is a distinction between rule by law and the rule of law. The rule of law traditionally signifies that law—the rights, duties, and rules regulating the lives of citizens in a state—occupies a status above politics. The independence of the law entails two principles. First, the law must apply to everyone equally, no matter how powerful or weak. The second criterion, more modern in origin, is that the law must secure the rights and liberties of citizens: the law must protect the individual from overreaches of the state by legally recognizing an individual’s sphere of independence, encompassing private property, freedom of speech, and freedom of religion. Rule by law, in contrast, refers to the instrumentalization of law for political purposes (see, e.g., Rajah, 2012). The law may exist as opposed to an arbitrary state of nature—providing, therefore, some security for the individual—but the sovereign stands above legal norms and directs their application. The suppression of individual freedom takes on a “legalistic” form: the state may not subject citizens to despotism in the sense of arbitrary punishments, but it can deploy the law to curtail individual liberty in the negative sense.

To what extent, then, does a codified constitution protect individual liberty? A codified constitution provides some necessary bulwarks against the abuse of state power. By definition, it represents the supreme expression of the law; in theory, no magistrate, legislator, or politician may contravene its assertions, and as a document, it can be modified only through a specified legislative process rather than at the whims of a despot. A constitution can thereby secure individual liberties as legal rights exercisable against the state—as in the first ten amendments of the United States Constitution—moving the protection of personal freedoms from a purely moral framework into a legal one. It seems, then, that a codified constitution fulfills the two fundamental tenets of the rule of law: subjecting government to the law reinforces the law’s universality, and the inclusion of rights against the state turns individual liberties from a uniquely moral consideration into a legal one.

Nevertheless, the existence of a codified constitution in no way guarantees the restraint of authority. On the contrary, in the hands of manipulating autocrats, written constitutions can extend power despite being designed to curb its abuse. A codified constitution in an authoritarian state provides a veneer of liberal legitimacy for illiberal actions; it may serve as a vehicle for despotism instead of a check on power. For example, Russia’s president, Vladimir Putin, modified the country’s constitution in March 2020 to reset his own term count, sidestepping the rule barring anyone from serving more than two consecutive terms. Following the changes, ratified by the passive Duma, or parliament, Putin may preside over the nation until 2036, when he will be 83—effectively legitimizing rule for life.

Similarly, in Turkey, another country that scores poorly on the EIU Democracy Index, president Recep Tayyip Erdogan pushed through a wave of constitutional amendments in 2017 that granted him an array of unchecked powers. Erdogan justified his measures through a referendum marred by allegations of ballot mismanagement and illegal changes to the electoral law by his government. Protestors who rallied against the referendum were swiftly arrested and prosecuted.

It is telling that even strongmen like Erdogan and Putin continue to work within constitutional frameworks despite their blatant rejection of liberal norms. The conditions people demand for political legitimacy in the twenty-first century differ from those of the seventeenth. Sovereigns can no longer convincingly rule by divine right but must obey—however deceptively—standards of citizen consent recognized by post-war international human rights charters such as the UN’s 1948 Universal Declaration of Human Rights. In this context, superficial adherence to liberal values through a codified constitution can serve authoritarianism rather than curtail the misuse of power. By establishing a framework that theoretically binds the governed and the governor, the state can restrict individual liberty while claiming popular consent. An authoritarian regime can construct an entire edifice of constitutional authority to support its illiberal actions. Such is the case with the Islamic Republic of Iran, which suppresses political dissidents, stifles the press, and detains secularists and religious nonconformists—all in the name of its constitution (see, e.g., Tamanaha, 2004). Far from opposing despotism, a codified constitution can fall prey to manipulation that legitimizes despotism. Returning to the earlier distinction, codified constitutions can engender rule by law—where the state recognizes the law but stands outside its stipulations, wielding the law to its benefit and to the detriment of individual liberty—as witnessed in Russia, Turkey, and Iran.

Why are codified constitutions so often vectors of autocracy rather than safeguards of freedom? The answer lies in their compatibility with rule by law and their inability, by themselves, to satisfy the rule of law. In the 1970s, the Austrian economist and political philosopher Friedrich Hayek distinguished at length the “rule of law” from the “rule of legislation” (Hayek, 1973). For Hayek, legislation is the product of top-down decision-making by politicians. He contrasts the imposition of rules characteristic of legislation with the evolutionary nature of common law: the former originates by decree, while the latter emerges over centuries through custom and social interaction. While Hayek’s reservations about legislation do not concern us here, his separation of law that stems from decree from law that evolves through custom explains the limitations of codified constitutions.

In codified constitutions, constitutional authority revolves around a single piece of legislation: the constitution itself. Thus, if an aspiring despot can muster a legislative majority or supermajority, depending on the requirements, they can modify the constitution for their own purposes, creating a situation of rule by law. Accordingly, even with the doctrine of separation of powers present in almost every modern codified constitution, which aims to secure the independence of the law, an empowered executive can wield power unchecked if they control the means to change the constitution. Contrast this with constitutional authority in uncodified constitutions, such as the United Kingdom’s. In the UK, constitutional law comprises acts of parliament such as the 1689 Bill of Rights alongside a wide body of common law shaped by centuries of custom. To instrumentalize such a constitution for political purposes—creating rule by law instead of the rule of law—proves difficult, since it would require revoking previous charters and acts of parliament as well as overturning centuries of common law precedent. More important even than a “separation of powers” is this separation of constitutional authority: constitutional legitimacy is dispersed in such a system, making it hard for would-be authoritarians to extend their reach while claiming to respect the popular will embodied in the constitution.

Aristotle remarked on the inability of codified, written law to secure individual freedom more than two millennia ago: “a man may be a safer ruler than the written law, but not safer than the customary law” (Aristotle, Politics). Although codified constitutions today detail fundamental individual liberties and design governing institutions to be independent, they remain centralized documents: the ability to control the constitution entails the power to rule unchecked. The law can thus fall prey to political abuse, creating a situation of rule by law instead of preserving its independence—the rule of law.

All too often, liberals persuade themselves that the guarantee of liberty comes with the proclamation of a constitution. But as Putin’s Russia, Erdogan’s Turkey, Khamenei’s Iran, and countless other examples testify, securing liberty means going beyond constitutionalism. Other institutions—a strong common law tradition in the United States, for example, or a free and independent press in Western Europe—are necessary for individual freedom to prosper. Just as John Stuart Mill showed in On Liberty that democracy can be a recipe for despotism as much as a source of freedom, liberals should remember that the same lesson holds true for codified constitutions.

References:

  1. “Constitution of the Russian Federation.” Retrieved June 5, 2021. http://www.constitution.ru/en/10003000-03.htm
  2. “Democracy Index 2020: In sickness and in health?” The Economist Intelligence Unit. Retrieved June 5, 2021. https://www.eiu.com/n/campaigns/democracy-index-2020/
  3. Berlin, Isaiah. (1969). “Two Concepts of Liberty.” In Four Essays on Liberty. Oxford University Press.
  4. Constant, Benjamin. The Liberty of Ancients Compared with that of Moderns. Online Library of Liberty. Retrieved June 20, 2021. (Original work published 1819) https://oll.libertyfund.org/title/constant-the-liberty-of-ancients-compared-with-that-of-moderns-1819
  5. Hayek, Friedrich. (1960). The Constitution of Liberty. University of Chicago Press.
  6. Rajah, Jothie. (2012). Authoritarian Rule of Law: Legislation, Discourse and Legitimacy in Singapore. Cambridge University Press.
  7. “Russia’s president reluctantly agrees to 16 more years in power.” (2020, March 12). The Economist. https://www.economist.com/leaders/2020/03/12/russias-president-reluctantly-agrees-to-16-more-years-in-power
  8. “Recep Tayyip Erdogan gets the power he has long wanted—at a cost.” (2017, April 22). The Economist. https://www.economist.com/europe/2017/04/22/recep-tayyip-erdogan-gets-the-power-he-has-long-wanted-at-a-cost
  9. Tamanaha, Brian. (2004). On the Rule of Law: History, Politics, Theory. Cambridge University Press.
  10. Hayek, Friedrich. (1973). Law, Legislation and Liberty, Volume 1: Rules and Order. University of Chicago Press.
  11. Aristotle. (1988). The Politics and Constitution of Athens (Everson, Stephen, Trans.). Cambridge University Press. (Original work published ca. 350 BCE)
  12. Mill, John Stuart. (1989). ‘On Liberty’ and Other Writings (Collini, Stefan, Ed.). Cambridge University Press. (Original work published 1859)

The Danger and Promise of Govcoin

Central bankers are hardly the radical type. Most of their job lies in placating the worries of countless investors and consumers, stabilizing markets and the broader economy—hardly a recipe for disruption. Yet from the marble hallways of the Fed to the elegant conference rooms of the ECB, you will find the inklings of a transformative new way of looking at the central bank’s role in handling money. Welcome, denizens of the 21st century, to the age of govcoin.

To understand why these government cryptocurrencies are so transformative, observe how banking currently works. Since the 17th century, banks have fulfilled their roles as keepers of deposits and allocators of credit through essentially the same system, termed fractional reserve banking. In short, since banks figure that most depositors will not withdraw their money at once, they can use those deposits to lend to consumers and businesses. As a result, the former buy products and the latter sell them, generating yet more deposits. In this way, banks play a crucial role in storing deposits, allocating credit, and creating new money in the form of deposits.
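The mechanics reduce to a geometric series. Below is a minimal sketch, assuming a stylized 10% reserve requirement (the figure is illustrative, not drawn from any actual regime), of how a single deposit multiplies through the system:

```python
# Stylized fractional reserve banking: each bank keeps a fraction of
# deposits as reserves and lends out the rest, and every loan eventually
# returns to the system as a new deposit.

def total_deposits(initial_deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    """Total deposits generated from a single initial deposit."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= 1 - reserve_ratio  # the lent-out share becomes a new deposit
    return total

# With a 10% reserve ratio, $100 of fresh deposits supports roughly
# $1,000 of total deposits: the classic 1 / reserve_ratio multiplier.
print(total_deposits(100, 0.10))  # ~999.97, approaching 100 / 0.10
```

The point of the toy model is simply that most money in circulation exists as bank deposits conjured from lending, not as central-bank cash, which is why a run on withdrawals is so dangerous.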

When fractional reserve banking first developed in early modern Europe, it rapidly led to tremendous economic advances: more credit allocated to commerce, industry, and enterprise of all kinds, powering Europe to unprecedented prosperity. But contemporaries soon noticed the system was not without flaws. Consider, for instance, the consequences for the system of a calamitous natural disaster or a financial crisis: would people not rush to withdraw their deposits, spelling ruin for the banks? The answer to this question shaped modern central banking.

Already by the end of the 17th century, England and Sweden had adopted central banks as “lenders of last resort”—emergency banks, for banks. By the 19th century, the reasoning underpinning central banks was laid out by Walter Bagehot, a banker and journalist, in his seminal Lombard Street. Published in 1873, at a time when currency was backed by gold, the book neatly details the logic behind central banks: instead of every bank holding sizable gold reserves, an inevitable drag on profitability, one “central” bank would store gold while commercial banks took deposits and issued loans. So even though commercial banks create more money than the gold in the system, if people rush to withdraw their deposits in a time of crisis, the central bank can still bail out the commercial banks with its gold reserves. Though central banks today hardly store gold like Bagehot’s 19th-century Bank of England, the system persists, with fiat currency taking over the role of precious metals: commercial banks must hold a certain amount of money issued directly by the central bank, which replaces gold.

The idea of govcoin—government cryptocurrency—promises to upend banking by effectively killing commercial banks and replacing them altogether with the central bank. Since most money in circulation comes from loans and deposits, central banks have limited control over the money supply, constraining their efforts to jumpstart the economy in a downturn. Moreover, the most vulnerable citizens often have the least access to banks: 6% of Americans, amounting to around 14 million people, are unbanked, according to a report by the Federal Deposit Insurance Corporation (FDIC). The data are even gloomier in developing countries, with India alone home to more than 190 million unbanked adults, making access to credit difficult and hampering commercial opportunities and welfare schemes alike. Indeed, during the coronavirus pandemic, the US government found that many of its most vulnerable citizens could not cash their stimulus checks because they lay outside the banking infrastructure that serves most Americans.

With govcoins, citizens could deposit money directly with the central bank. Any citizen would have access to credit, and the central bank could exercise more direct control over the economy. The latter sounds appealing to many who believe the modern economy needs radical change to confront climate change; think of the Fed supporting decarbonization by doling out credit to green entrepreneurs, or even a system of digitally exchangeable carbon credits pegged directly to the supply of govcoins. In addition, deposits would be backed by central bank reserves, removing the risk of the bank runs that plague fractional reserve banking. Others think govcoins could stimulate entrepreneurship by expanding access to credit, reducing inequality in the process. Moreover, they would cut banking and transaction fees—a sizable benefit to consumers.

But even as decarbonization, a readier response in a crisis, and universal access to credit sound attractive, govcoins harbor immense danger. By killing commercial banks, they threaten to tie all commercial ventures to the central bank, replacing private investors with government bureaucrats. Such a system would pose political dangers to liberty and endanger the vitality of free economies by sidelining private credit. As the famed economist Friedrich Hayek observed in the 1940s—when statism’s rise appeared inexorable—the power of free markets lies in their decentralization. The army of bankers and venture capitalists that fuels the engines of commerce possesses far more information, collectively, than a handful of bureaucrats. Transforming central bankers into government planners thus runs the risk of allocating credit along political, rather than economic, lines—a poor recipe for both freedom and economic growth.

What to do, then? How can banking harness the capacities of cryptocurrency to expand access to credit without entrenching economic power? The answer may lie in taking a more integrative view of the money economy. Several cryptocurrencies currently exist on the market, but many are poorly suited to serve as money: if Bitcoin, for example, swings wildly in value from day to day and most retailers refuse to accept it for payment, it hardly functions as currency. Govcoins solve the problem by issuing crypto in the name of the state. But a swath of crypto alternatives pegged to “real” money, such as the dollar, is emerging, termed stablecoins. These currencies are still in their experimental phase, with regulation surrounding them scarce, though there are calls for new legislation. Such an approach best suits free societies: built on crypto, these new currencies could lower transaction costs much as Bitcoin promises to do, while avoiding the danger of a political monopoly over the economy posed by govcoins. Moreover, if their issuers are regulated as banks—something America’s regulators are rightly keen to do—stablecoins could help the private sector maintain its vital role in a future crypto-based money economy.

Experimentation around govcoin is still in its infancy, yet the e-dollar, e-yuan, and e-euro are coming—accompanied by a tide of creative destruction. Such innovation is not only radical but in many ways welcome. Nevertheless, to stay open and transparent, market economies should monitor the deployment of govcoins carefully lest they monopolize the all-important role of credit allocation. In particular, policymakers and entrepreneurs ought to favor the legal integration of stablecoins into the money economy, mitigating the potential dominance of govcoins while responding to the shortfalls of private cryptocurrencies such as Bitcoin. Central bankers will have to walk a fine line between peril and promise. They should embrace the challenge.

The Corporate Statesman

Deep in the Brazilian section of the Amazon rainforest lie the ruins of a lost settlement: Fordlandia. Ostensibly a rubber production site established in the late 1920s, it was intended by Henry Ford, founder of the Ford Motor Company, to become a symbol of his mission civilisatrice, a utopia embodying the social ideals of his enterprise. The spirit, life, and organization of the settlement were meant to reflect higher, moral aspirations. A water tower was erected in 1930 to highlight progress and modernity; managers were instructed to keep the site alcohol-free. Social progress, not business, was the settlement’s larger goal. Ford would proudly proclaim: “We are not going to South America to make money, but to help develop that wonderful and fertile land.”

Few today look back at Henry Ford as a man to emulate. Fordlandia was a failure—the site was abandoned by the 1940s and the project discontinued by Henry Ford’s grandson and successor, Henry Ford II—demonstrating the incompatibility of good business and good governance. And although Ford revolutionized the automobile industry and was famous for his comparatively generous wages, many of his social ideals were repugnant. His anti-Semitism is on full display in The International Jew, a series of pamphlets touting a Jewish conspiracy infecting America that inspired the likes of Adolf Hitler.

Fordlandia may be lost to time, and Ford’s social thought dismissed as an unfortunate episode in the history of ideas, but the ideal of the “corporate statesman” lives on. More and more corporations are assuming a political and social mandate, shifting the primary goal of business from producing long-term profit for shareholders—a doctrine popularized by the Nobel-winning economist Milton Friedman—toward an inclusive form of capitalism encompassing the values of “stakeholders”: society, workers, and community. Far from being exiled to the pages of history, the corporate statesman is making an emphatic return in 21st century America. Though businesses pledging to respect workers and consumers is undoubtedly good, corporations pursuing social ends risk mixing business and politics to a worrying extent.

That the corporate statesman should return today may come as a surprise. After all, the paternalistic capitalism evoked by the “stakeholder” approach seems long gone, a product of the 1940s and 1950s. The rise of deregulation, the dominance of finance, and the vigor of neoliberalism in both left- and right-wing circles in the 1980s and 1990s seemed to signal the triumph of Friedman’s shareholder primacy theory. In many ways, however, the old model never disappeared. In tech, for example, superstars such as Google and Apple built massive campus headquarters complete with food and housing, recalling the company towns of industrial behemoths like DuPont and Ford. Healthcare and other social benefits have remained tied to employment since the 1940s. Few entrepreneurs, even in the era of so-called shareholder primacy, declared that the sole purpose of business was profit no matter the cost. But leaders of major corporations after the 1980s never sought a political mantle. CEOs such as Steve Jobs may have claimed to transform the world, but the ideal of the businessperson as a guardian of society’s interests was relegated to a bygone era, the days of Henry Ford. That is, until now.

Today’s corporate statesmen extend their influence beyond mere employment toward political activism. The Business Roundtable, an association of prominent CEOs, issued a statement in 2019 on the purpose of the corporation, arguing that corporations have a duty to support workers and their communities in addition to shareholders. Major companies from Amazon to JPMorgan Chase made public remarks on police brutality following the murder of George Floyd, an African American man, at the hands of a white police officer last May. Social media platforms such as Twitter, YouTube, and Facebook are starting to police political speech deemed offensive or harmful. And when Georgia’s Republican-controlled state assembly passed a worrying set of laws making it harder to vote, Coca-Cola and Delta Air Lines released statements criticizing the legislation.

Speaking in favor of voting rights and condemning police violence are laudable acts; it is also true that corporations must pay attention to the needs of their workers and the surrounding community to do good business. Nevertheless, there is a difference between supporting such measures individually and taking on the collective role of interpreting and enforcing society’s perceived needs. A good example of the former was Coca-Cola’s intervention on behalf of a dinner for Martin Luther King Jr. after his Nobel Peace Prize victory in 1964. Following the ceremony in Oslo, Norway, the civil rights leader returned to his hometown of Atlanta for a celebratory dinner organized by a group of progressive intellectuals and sponsored by the city’s mayor. The city’s white aristocracy, however, spurned the invitation. It was only through Coca-Cola’s subtle messaging—the company called it an “embarrassment for Coca-Cola to be located in a city that refuses to honor its Nobel prize winner”—that the event was saved and attendance skyrocketed.

Some companies today echo Coca-Cola’s approach. JPMorgan Chase, a bank, said it would commit 30 billion dollars’ worth of loans to Black and Latino communities to fix “banking’s systemic racism.” Such measures can help mitigate racial inequality and reflect an open society in which business considers it has a role to play in advancing social progress. However, corporate statesmanship—where businesses claim to speak for and direct their resources toward the “social good”—can backfire in three important ways. First, it risks looking like corporate grandstanding that trivializes real issues. Second, it shields corporations from scrutiny that should exist in a competitive market system. And finally, it intertwines business and government to a worrying extent, entrenching power.

Start with grandstanding. While JPMorgan Chase, Amazon, Delta, and a host of other companies claim to support, say, racial progress, their actions often fail to live up to their lofty ideals. Even as Amazon gave 10 million dollars to organizations supporting “justice and equity,” such as the NAACP, it now faces a lawsuit alleging “systemic racism” in its corporate offices. Google, another tech giant, drew criticism following the resignation of a Black researcher who was allegedly driven out for criticizing a company AI hiring algorithm she claimed was biased against minorities. And even as Delta’s PR team issued eloquent statements on Georgia’s voter suppression bill, pointing out the harm done to Black communities especially, the corporation remains part of an oligopolistic industry that benefits from government subsidies yet restricts choices for consumers while imposing high prices. A corporate stance on racial justice restricted to so-called “diversity training” for staff and thundering rhetoric that falls short of reality risks breeding apathy about fundamental issues such as racism, sexism, and inequality. If racism is just another theme in a corporate ad campaign—think of the dreadful Kendall Jenner Pepsi advertisement that ran in 2017—does it not lose its gravity?

Another worrying side of the corporate statesman is the blurry language used to sell social activism—language that can deflect scrutiny. Take the term “stakeholder capitalism.” What does it mean that a business will fulfill the needs of all its “stakeholders”? Does it mean job security for workers? Maybe—but Delta, a fervent champion of corporate statesmanship, shed close to 20% of its workforce during the pandemic. Does endorsing stakeholders mean companies will prioritize the needs of consumers over their own? Perhaps, but that seems hard to square with the tech sector’s recent lobbying blitz after Congress introduced new antitrust legislation. After all, if companies elevate the public good over their own, why do they still spend tens of millions of dollars more on lobbying than on social causes? Claims of supporting stakeholders over profits are more PR than substance—efforts to placate growing criticism of corporations by emphasizing social activism.

Finally, corporate statesmanship dangerously consolidates power in the hands of the few by allowing corporations to define social progress and inviting them into the political sphere. Concentration of power—especially political power—is almost never desirable. This thinking underpins key elements of liberal democracy, from constitutionalism to independent judiciaries. So why would we want a handful of CEOs to set the terms of social debate and—even more worryingly—act as executors of the public will in social affairs? Corporations aiming to fight racial injustice and advance voting rights will hardly perturb liberals. But letting corporations frame the social debate provides them the legitimacy to take actions, in the name of social progress, that are inimical to freedom. Look at Facebook and Twitter, which grant themselves the liberty to police speech they deem socially “offensive” on their platforms, often with little due process or clear rules, posing a threat to free speech in what has become the new public sphere.

Giving business entry to political and social debates might also lead to an uncomfortable alliance between business and politics, in which legislation protects firms from free market competition as long as they support government policy. Before the 1978 Airline Deregulation Act, carriers were given virtual monopolies over certain routes as long as they stood as national champions; today, large chipmakers could secure legal support if they refuse to sell to America’s rivals. Mixing business and social issues could produce a relationship that proves symbiotic for corporations and the state at the expense of competition and consumers.

Businesses often make our lives easier—sponsoring innovation, developing useful products, and pushing forward the boundaries of the possible. But setting the terms of social debate should be left to individuals, and the door between politics and business must be firm, not revolving. After all, Fordlandia lies in ruins while today’s liberal democracies continue to demonstrate the advantages of keeping business out of politics.

Working from Home: A Solution to Structural Inequalities?

The typical worker before 1800 toils on a farm. By 1900, they labor in a factory. Come 2021—they work from home. The Covid-19 pandemic has fundamentally redefined the workplace. Before the coronavirus swept across the world, only 20% of Americans capable of working from home did so all or most of the time; today the number reaches 71%, according to a Pew Research poll from last December [1]. Moreover, over half of those able to work from home, 51%, said they wish to do so even after the pandemic ends. Further, in February 2021, almost a year after the tight lockdowns at the beginning of the pandemic, a Gallup poll revealed that fully 56% of Americans were “always” or “sometimes” working from home [2].

Since the rise in working from home appears to be here to stay, one might ask whether this reshaping of the nature of work will have long-term economic impacts. More precisely, will the long-run equilibrium quantity of output in the US economy shift due to working from home? Taking a long-run model of the American economy, the answer seems to be yes: the expansion of working from home will lead to an increase in the long-run equilibrium quantity of output. Both a rise in the availability of resources and gains in productivity point to such an outcome.
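The logic can be stated compactly. The sketch below assumes a standard Cobb-Douglas production function, a functional form chosen here purely for illustration; the essay itself commits to no specific model.

```latex
% Stylized long-run model (Cobb-Douglas form assumed for illustration):
\[
  Y^{*} = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1,
\]
% where $Y^{*}$ is long-run equilibrium output, $A$ is total factor
% productivity, $K$ is capital, and $L$ is labor. If working from home
% raises labor-force participation ($L\uparrow$) and productivity
% ($A\uparrow$), both partial derivatives are positive:
\[
  \frac{\partial Y^{*}}{\partial L} = (1-\alpha)\,\frac{Y^{*}}{L} > 0,
  \qquad
  \frac{\partial Y^{*}}{\partial A} = \frac{Y^{*}}{A} > 0,
\]
% so either channel alone, and certainly both together, shift the
% long-run equilibrium quantity of output upward.
```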

A primary obstacle to long-run economic growth in the United States is the effective isolation of a large portion of the labor force because of the economy’s structural gender and housing inequalities. Two groups in particular—mothers and young people—find themselves in a disadvantaged position that hinders their ability to integrate into the workforce, depriving the economy of valuable labor.

The lack of adequate external support for child care in the US forces many mothers to choose between becoming full-time caregivers and participating in the labor force. The Center for American Progress, a think-tank, spoke in 2019 of a “clear connection between access to affordable, quality child care and labor force participation—especially for mothers,” illustrating the dilemma between working and caregiving [3]. The expansion of opportunities for working from home may help mitigate the issue. Though federal child care legislation would play a large part, today’s structural change in work can help mothers balance caregiving and working. Indeed, working from home proves much more flexible than commuting to the office daily; it is unsurprising that a Gallup survey from 2016 showed 54% of working moms preferred working from home to the office [4]. Working from home can therefore improve both labor force participation and productivity among mothers in the US economy.

Working from home offers benefits to young people, too. With home and rent prices having risen by 61% and 73% respectively since 2000, many millennials find themselves locked out of the city centers where lucrative jobs concentrate [5]. Around 40% of them, according to CNBC, receive help from parents for everyday expenses such as rent, demonstrating the financial strain caused by the current real-estate market [5]. But with working from home, young people can work from outside of town, avoiding costly housing and lengthy commutes. Through working from home, then, both mothers and millennials—two groups particularly harmed by structural disadvantages in the US economy—may better integrate into the labor market, increasing the availability of labor and productivity, and thus economic output.

To be sure, working from home is no panacea for the US economy’s embedded gender and housing inequalities. Though working from home can benefit mothers, other measures, such as federal child care legislation, would be equally useful. Besides, growing housing costs across the board may make living out of town just as expensive as city life, placing a burden on young people. And doubts remain about whether productivity strengthens or weakens when working from home. Nevertheless, a rise in opportunities to work from home does seem likely to increase the economy’s long-run equilibrium quantity of output by better integrating disadvantaged segments of the US labor pool, particularly young people and working mothers, into the workforce. In a time plagued by uncertainty, surely that merits cheer.

References:

1. Parker, Kim; Horowitz, Juliana; Minkin, Rachel. “How the Coronavirus Outbreak Has – and Hasn’t – Changed the Way Americans Work.” Pew Research Center, December 9, 2020. https://www.pewresearch.org/social-trends/2020/12/09/how-the-coronavirus-outbreak-has-and-hasnt-changed-the-way-americans-work/

2. Saad, Lydia; Hickman, Adam. “Majority of U.S. Workers Continue to Punch In Virtually.” Gallup, February 12, 2021. https://news.gallup.com/poll/329501/majority-workers-continue-punch-virtually.aspx

3. Schochet, Layla. “The Child Care Crisis Is Keeping Women Out of the Workforce.” Center for American Progress, March 28, 2019. https://www.americanprogress.org/issues/early-childhood/reports/2019/03/28/467488/child-care-crisis-keeping-women-workforce/

4. Bloom, Ester. “More than half of working moms would prefer to be at home, survey finds.” CNBC, November 4, 2016. https://www.cnbc.com/2016/11/04/more-than-half-of-working-moms-would-prefer-to-be-at-home-survey-finds.html

5. Carter, Shawn. “Here’s the big reason so many young people need mom and dad to pay their rent.” CNBC, May 24, 2018. https://www.cnbc.com/2018/05/24/high-housing-costs-mean-young-people-need-family-help-with-rent.html

Vaccine Economics: The Tragedy of the Commons

More than a year since the world recorded its first coronavirus cases in Wuhan, China, the Covid-19 pandemic shows few signs of slowing down. The United States, which at the time of writing records over 70,000 cases a day, is expected to pass 500,000 deaths by the end of February—more than the nation’s casualties in both world wars. In Europe, case counts from the start of the pandemic in March and April appear trivial compared to today’s figures. And in Latin America, a new variant of the virus has emerged in Brazil that worries scientists even more than other recent mutations, such as those first detected in the UK and South Africa.

Despite the relentless persistence of Covid-19, the development of multiple vaccines since November of last year offers much hope. Vaccines from Moderna, Pfizer, and Oxford/AstraZeneca are already in use across North America and Europe. Israel has rapidly inoculated over 80% of its elderly population, with early case data showing promising results. Other countries, such as India and Japan, are ramping up their efforts. Special credit should also go to Britain, on track to vaccinate everyone who wants a jab by the end of June. According to some experts, such as Dr. Anthony Fauci, director of the US National Institute of Allergy and Infectious Diseases, rising vaccination might allow some semblance of normality to return before the end of this year.

However, the success of the world’s vaccine rollout hangs in the balance. So far, the danger lies less in the technology behind the vaccine and more in its global distribution, which rests on economic nationalism instead of cooperation. An unjust and inefficient allocation of doses will shake citizens’ faith in government, foster hostility between states, and—most importantly—undermine efforts to halt the spread of Covid-19. Therefore, a careful look at vaccine economics is crucial to the world’s collective challenge of dealing with the pandemic.

The distribution of vaccine doses between different countries evokes a problem often studied by economists: the tragedy of the commons, a situation that arises when individuals—in their self-interest—exploit shared resources to the detriment of others. Pioneered by William Forster Lloyd, a 19th-century British economist, the idea was initially applied to describe overgrazing on common, collectively owned land. In the absence of legal constraints on individual usage, a farmer could choose to put more of his cattle on the land than his neighbors. But his cattle would graze land also used by other farmers, leaving their herds disadvantaged. The process could eventually lead to the destruction of the initial resource: farmers compete to put more and more of their cattle on the land, leading to overgrazing. By acting unconstrained in their own interest, the farmers risk imperiling their livelihood by degrading collectively owned land.
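To make the incentive concrete, here is a toy numerical sketch; the payoff function and all figures are invented purely for illustration and follow no particular economic calibration.

```python
# A toy commons: yield per cow falls as the shared pasture gets more
# crowded. All numbers are invented purely to illustrate the incentive.

def yield_per_cow(total_cows: int, capacity: int = 100) -> float:
    """Value each cow produces, shrinking as total grazing rises."""
    return max(0.0, 1.0 - total_cows / capacity)

def farmer_payoff(own_cows: int, others_cows: int) -> float:
    """One farmer's payoff given everyone else's herds."""
    return own_cows * yield_per_cow(own_cows + others_cows)

# Suppose nine other farmers graze 8 cows each (72 in total). Adding
# cattle still pays for the individual farmer:
for own in (8, 9, 10):
    print(own, round(farmer_payoff(own, others_cows=72), 2))
# 8 -> 1.6, 9 -> 1.71, 10 -> 1.8: each farmer gains by expanding his
# herd, yet if all ten expand to 10 cows the pasture hits capacity and
# every farmer's payoff collapses to zero.
```

The individually rational move (add another cow) and the collectively ruinous outcome (a barren pasture) are the same arithmetic viewed from two angles, which is precisely Lloyd's point.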

Today’s distribution of vaccines poses similar challenges. With no legal constraints on how many doses a state may purchase, the wealthiest nations often end up with the most jabs—regardless of population size or the severity of their situation. Thus, in December, Canada had secured enough doses to vaccinate its population four times over, while no sizeable African country had yet started inoculating its population (the continent’s richest nation, South Africa, will begin this month). The discrepancy in injections between rich and poor states explains why millions of doses have already been administered in prosperous countries such as Britain and the United States, while few in Latin America, Africa, and the Middle East have received a jab. Indeed, according to one estimate by Duke University, high-income countries currently possess 4.2 billion doses, while low- to middle-income states hold only 670 million—roughly a sixth as many.

This so-called “vaccine nationalism” will only hamper efforts to curb new cases: when rich countries buy disproportionate quantities to vaccinate their own populations, fewer vaccines remain for poorer, less developed countries, which will consequently face higher case numbers. Since Covid-19 spreads exponentially and vaccinating a population to herd immunity takes time, these new cases will soon reach rich countries as well, burdening hospitals and the economy. As with the farmers, the self-interested exploitation of a shared resource by individual actors produces a situation that leaves everyone worse off.

Vaccines, of course, are not commons; they are privately owned, produced by private enterprises. Moreover, while the quantity of land remains fixed, the supply of vaccines can grow to meet global demand. Nevertheless, treating them as a shared resource would ultimately result in more efficient distribution; for the moment, vaccine production cannot keep pace with the rise in Covid-19 cases, leading to inequalities in access to jabs. Covax, an international inoculation program sponsored by the World Health Organization (WHO) and a host of multilateral organizations and countries, aims to help poorer countries develop, purchase, and administer vaccines for their populations. The scheme provides a good place to start but is limited in its means: so far, it has not begun distribution, and the first phase of its rollout plan includes only 330 million doses for 145 countries; by comparison, the UK alone has procured over 400 million doses. Another issue is the presence of rich, self-financing countries, such as Canada and New Zealand, in the first phase of the Covax distribution, which will benefit from doses that should be going elsewhere. As a dizzying series of mutations appears in Latin America, Africa, and Europe, it is unlikely that Covax’s distribution plan will be able to keep up with the disease’s spread.

In hindsight, the WHO could have cooperated with private companies to create a “vaccine credits” system to inoculate the world’s population equitably. Credits could have been allocated based on population size and financial means, with India and Nigeria securing more credits than, say, Canada or Liechtenstein. The credits would then be used to acquire a certain quantity of vaccines, much as carbon credits provide the right to emit one ton of greenhouse gas. But in today’s world, with multilateral agreements weakened and nationalism inflamed by the pandemic, no supranational organization could effectively distribute vaccines on a global scale. Besides, part of the incentive behind the various vaccines on the market or in development comes from the financing of big, rich countries like America, which spent 18 billion dollars on vaccine projects for its own population’s use through Operation Warp Speed during the Trump administration. If the US were spending money not to vaccinate its citizens but to fund a global initiative, it would undoubtedly have spent far less, hampering efforts to create a vaccine. Finally, whether in rich or poor states, politicians have a political and moral duty to uphold the needs of their constituents. If a rich country buys a disproportionate amount of vaccines, it is hard to argue that its government acted unjustly or unfaithfully, even if doing so hurts the chances of poorer nations.
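For concreteness, here is one way such a credit formula might have looked. The weighting (population divided by income per head) and the rounded 2021-era figures are assumptions for illustration only; the essay proposes no specific formula.

```python
# Hypothetical vaccine-credit allocation. The weighting scheme is
# invented for illustration: poorer, larger countries receive
# proportionally more credits.

def allocate_credits(countries: dict[str, tuple[int, float]],
                     total_credits: int) -> dict[str, int]:
    """countries maps name -> (population, GDP per capita in USD)."""
    # Weight each country by population divided by income per head.
    weights = {name: pop / gdp for name, (pop, gdp) in countries.items()}
    scale = total_credits / sum(weights.values())
    return {name: round(w * scale) for name, w in weights.items()}

# Approximate, illustrative 2021-era figures:
demo = {
    "India":   (1_380_000_000, 2_000),
    "Nigeria": (  206_000_000, 2_100),
    "Canada":  (   38_000_000, 43_000),
}
print(allocate_credits(demo, total_credits=1_000_000_000))
# India and Nigeria receive the overwhelming share of credits; Canada,
# rich and small, gets comparatively few, mirroring the essay's intuition.
```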

Ensuring a more equitable rollout of vaccines rests on the actions of both governments and supranational organizations. Multilateral institutions such as the WHO must prioritize low- to middle-income countries in their inoculation schemes, including Covax. Though any country can request Covax’s help, prosperous states like Canada—holding the most vaccine doses per capita—should sit far lower on the list than countries in dire need of financial aid. The UN and other multilateral organizations should also encourage generosity among rich countries, expressed through funding programs like Covax or sending jabs directly as humanitarian aid. The rapid development of a coronavirus vaccine is no small cause for celebration. But rich countries should remember the plight of others when procuring vaccines, lest the perennial tragedy of the commons make an appearance once again.