Haiti has the lowest electricity coverage in the Western Hemisphere, with only 37.9% of the population having regular access.[i] The energy sector in Haiti is broken by any modern standard: interruptions in service are so frequent that residents cannot rely on the public utility to provide even enough power to keep a freezer running to preserve food. A problem more distinctive to Haiti, however, is the rampant theft of electricity, with half of all residents connected to the grid illegally; a situation that a public utility elsewhere rarely has to contend with.[ii]

It is concluded that these problems stem from a failure of the government of Haiti: a complete lack of the institutional capacity needed to provide a public utility to the country. As a result, much of the population, including hospitals and other major institutions, relies on generators, which are highly inefficient, harm the environment, and make the country as a whole more susceptible to volatile oil prices. Moreover, a decrepit electricity generation and distribution network creates the conditions under which 54% of distributed electricity is stolen or lost, against a world average of only 8%.[iii]



            Far from being confined to the energy sector, the Haitian government as a whole can be characterized by an astonishing lack of institutional capacity. This is largely because Haitian politics have historically been so unstable, with coups a regular occurrence and widespread corruption an accepted part of everyday life. Dictators have regularly stolen millions of dollars from the Haitian treasury, further straining an already precarious fiscal situation. Government failure in the energy sector, then, is not unique in the context of the broader governmental apparatus. Insofar as this is true, the shortcomings of Haiti’s past have largely shaped its present, and Haiti is still playing catch-up to the rest of the world. Institutional weakness as it pertains to the energy sector manifests as follows:

  • Loss of technical know-how: In 2005, the Secretary for Energy, Mines, and Telecommunications (SEEMT) was eliminated, with the Ministry of Public Works, Transport, and Communications (MTPTC), as well as the Bureau of Mines and Energy (BME), supposedly taking its place. The SEEMT was previously tasked with creating energy policy, enhancing the electrical grid, and maintaining existing systems; it thus worked in both a policy and a technical capacity. Because the Haitian government made no effort to integrate the human capital within the SEEMT into the institutions assuming its responsibilities, a great deal of technical know-how was lost in the transition.[iv] This has led to a complete failure of the MTPTC to manage and maintain electrical infrastructure, much less advance and improve it.
  • Diffusion of responsibilities: There is no institution or agency tasked with regulating the energy sector in Haiti, ironically creating a problem that government, by its very definition, is supposed to solve. In theory, the MTPTC and BME both work with the state-owned electric utility Electricité d’Haïti (EDH) to advance national energy policy, but progress has been excruciatingly slow due to the fragmentation of responsibility. MTPTC, BME, and EDH have been working on a national energy policy since 2006, releasing a draft in 2012, but have not yet implemented any of it.[v]



Because the Haitian economy is so under-developed, with a GDP (PPP) per capita of only $1,800, real GDP growth rates of less than 2%, an unemployment rate of over 40%, and a chronic budget deficit, nearly everyone in Haiti uses wood or charcoal for lighting and cooking. Those who are well off instead use diesel generators, as they cannot rely on a constant supply of electricity from EDH.[vi]

  • Public health: Wood and charcoal account for 77% of primary energy use in Haiti and 93% of the fuel used for household cooking.[vii] As in many other developing nations that depend on similar fuels, this has serious public health implications, because these fuels are often burned indoors and emit harmful respirable toxins.
  • Environmental impact: The reliance on wood and charcoal fuels has created a tremendous demand for timber in Haiti. Given the size of the population relative to the country’s land area, this demand has driven near-total deforestation of the country. Deforestation at this scale also creates a cascade of other problems, e.g. the displacement of topsoil from higher elevations into waterways, which reduces agricultural output and has been documented to reduce certain river flows (and thereby drinking water sources) by 80%.[viii] The practice is unsustainable, and at this point it will take decades to repair the environmental damage already inflicted on the Haitian landscape.



            Lastly, perhaps Haiti’s most visible problem in the energy sector is the widespread theft of electricity, with an estimated 54% of all electricity produced being stolen or lost in transmission. The root causes that create the conditions for rampant theft are EDH’s inability to bill and collect payments, an unregulated electrical infrastructure that has been cobbled together and left unmaintained, and a lack of commercial customers, which shifts the revenue burden onto those who are poorest.

  • Billing and collections: EDH, suffering from the same institutional capacity problems that afflict the wider Haitian government, has shown a consistent inability to implement efficient billing and collection practices. Compounding inefficient bureaucratic procedures is a lack of electricity meters that would otherwise determine how much to bill each customer. A recent USAID project installed proper connections and meters in over 8,000 households and found that collection rates improved from 25% to above 90%.[ix] A central principle of collecting payment is first ensuring that the customer and the utility can agree on the level of service consumed.
  • Poor infrastructure: The electrical infrastructure of Haiti is severely under-maintained, which is most easily attributed to a lack of financial resources, a lack of technical know-how, and a complete absence of regulation of the infrastructure itself. Because the grid is down so often (most customers receive only around ten hours of electricity per day[x]), individuals have ample time to make illegal connections to distribution systems, since for much of the day there is no associated danger. Moreover, a dilapidated grid that provides such poor service creates a culture of non-payment, as consumers do not feel they are receiving a quality of service that justifies compensation.
  • Inflated prices: As previously mentioned, because businesses and organizations cannot rely on EDH to provide electricity around the clock, they turn to diesel generators for their energy needs. The effect this has on the economics of the public utility is far greater than one might initially assume. These firms are the would-be customers best positioned to actually pay for electricity service, but in their absence, the burden falls on those with the fewest resources. The final cost of electricity is therefore shaped by a customer base in which many are not paying, incentivizing EDH to raise prices in an attempt to recoup the lost revenue. In sum, this artificially inflates electricity prices, leading to a situation where electricity costs as much as $0.34/kWh in Haiti; far more than in nearly every other country on Earth, and about triple the cost one would encounter in the contiguous United States.[xi] This further incentivizes theft of electricity.
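The arithmetic behind this price inflation can be sketched with a toy cost-recovery model. The 54% loss figure is cited above; the $0.15/kWh underlying cost is a hypothetical assumption chosen only to illustrate the mechanism:

```python
# Illustrative sketch (hypothetical cost figure): how theft and losses
# inflate the break-even tariff a utility must charge paying customers.

def break_even_tariff(cost_per_kwh: float, paid_fraction: float) -> float:
    """Tariff at which revenue from billed electricity covers total cost."""
    return cost_per_kwh / paid_fraction

# Suppose generating and delivering a kWh costs $0.15 (assumed figure).
# With 54% of electricity stolen or lost (cited), only ~46% is billed.
tariff = break_even_tariff(0.15, 0.46)
print(f"${tariff:.2f}/kWh")  # roughly $0.33/kWh, near the cited $0.34/kWh
```

The model omits subsidies and non-payment among billed customers, but it shows how a loss rate of this magnitude alone can roughly double the tariff.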



It is well documented that widespread, reliable access to electricity is key to economic growth; thus, if Haiti ever wishes to become a legitimate player in the global economy, it must first solve this fundamental problem of electricity generation, distribution, and access. It is concluded here that the government of Haiti has failed for decades to build the institutional capacity needed to accomplish these goals. The spillover effects include, but are not limited to: the failure to produce any national energy policy or to decide who is to regulate the sector; widespread reliance on charcoal and generators for cooking and lighting, with detrimental public health implications; deforestation of nearly the entire country; artificially and insurmountably high electricity prices for consumers; an exacerbated fiscal deficit due to EDH subsidies to recoup lost revenue; and the extremely prevalent theft of electricity. These findings show that, despite the efforts of numerous countries to aid the people of Haiti in ways that increase the resilience and efficiency of their electrical grid, the ultimate responsibility to advance the cause of the Haitian people falls squarely on their own government.



[i]    The World Bank. Access to electricity (% of population). 2012. 1 March 2017.

[ii]   USAID. Haiti Energy Fact Sheet – January 2016. Fact Sheet. Washington: USAID, 2016.

[iii]  The World Bank. World Development Indicators: Power and Communications. 2014. 27 February 2017.

[iv]   The World Bank. Project Information Document: Haiti Electricity Loss Reduction Project. Report. Washington: The World Bank, 2006.

[v]   Energy Transition Initiative. Energy Snapshot: Haiti. Report. U.S. Department of Energy. Washington: U.S. Department of Energy, 2015.

[vi]   U.S. Central Intelligence Agency. The World Factbook: Haiti. 2017. 2 March 2017.

[vii]   Worldwatch Institute. Haiti Sustainable Energy Roadmap. Report. Washington: Worldwatch Institute, 2014.

[viii]   McClintock, Nathan. Agroforestry and Sustainable Resource Conservation in Haiti. Case Study. North Carolina State University. Raleigh: NC State, 2004.

[ix]   USAID. Haiti Energy Fact Sheet – January 2016.

[x]   Worldwatch Institute. Haiti Sustainable Energy Roadmap.

[xi]   Friedman, Lisa. Can Haiti Chart a Better Energy Future? 17 April 2013. 3 March 2017.





The history of petroleum production since the nineteenth century is largely characterized by the technology used to extract it from the Earth. Throughout this history, there have been many paradigm shifts in which a new technological regime dramatically improved the efficiency of petroleum extraction, transport, refining, and use. One of the most recent of these shifts is the development of rotary steerable systems (RSS), which allow drilling operators to control the direction of the drill bit while the drill is still in operation, that is, while the drill is under load and underground. Though the task seems rudimentary, the technology required to achieve this degree of remote control is incredibly complex – among the most advanced that mankind has developed – including highly sophisticated electronics that must operate reliably miles underground in some of the harshest conditions imaginable. These developments have been a critical component of the shale revolution and are among the chief reasons the modern fracking industry exists. We will explore what advances made these systems possible, and why a task as simple as drilling sideways isn’t as easy as it seems.



            The first engine-drilled commercial oil wells were established in the 1850s and 1860s in Canada and the United States, but it took many decades before producers became conscious of the utility of drilling wells at an angle that was not necessarily vertical.[i] The first widely known instance of a drilling rig intentionally drilling at an angle to achieve a specific objective occurred in 1934, when John Eastman and Roman Hines drilled slanted relief wells into an oilfield to reduce the pressure; an instrumental feat in reining in a well that had blown out. Eastman and Hines were featured in an issue of Popular Science in May of that same year, in which the magazine details Eastman’s primitive but effective surveying tool: “Into the hole went a single-shot surveying instrument of Eastman’s own inventions. As it hit bottom, a miniature camera within the instrument clicked, photographing the position of compass needle and a spirit-level bubble.” [ii] Indeed, the ability to “survey” and track the location, inclination (the difference in well angle from vertical), and azimuth (the compass direction) of the drill head is just as critical as having the hardware to drill in a given direction. Because micro-electronics had not yet been invented, early efforts relied on nothing more than crude instrumentation such as Eastman’s camera.

The first piece of technology that truly laid the foundation for directional drilling was the development of what is today known as a “mud motor,” which is a type of progressive cavity positive displacement pump (PCPD) that is generally located immediately behind the drill bit in the drill string. Obviously the mud motors of today look vastly different from early versions, but they can trace their lineage all the way back to two patents, the first of which is entitled Drills for Boring Artesian Wells, filed by C. G. Cross in 1873, and the second of which is entitled Machine for Operating Drills, filed by M. C. and C. E. Baker in 1884.


The first piece of technology enabling some measure of directional drilling. Source: Cross, C. G. Drills for Boring Artesian Wells. United States of America, Patent 142992. 23 September 1873.

The issue that both of these designs attempted to address was the collapse or breakage of the drill string in rotary and reciprocating drilling operations, respectively. Because early steel was of such poor quality, rotating the entire drill string assembly from top to bottom induced these failures. Shallow wells could be drilled without issue, but the torsional stresses on the drill string increased as the well got deeper, owing to the growing mass of the string and the torsional flex inherent to the pipe or rods being used. Both Cross and the Bakers developed mechanisms in which the drill string could remain stationary while the drill bit turned, by pressurizing the drill string with fluids such as water, steam, or air, then using this pressure to drive a very rudimentary motor, i.e. the drill bit.[iii] [iv] Having solved this problem, drill operators naturally discovered that they could use a section of slightly curved pipe in the drill string to influence the direction of the drill, so long as the string remained stationary while the bit turned. They could install one length of curved pipe, lower the assembly to the “kickoff point” (the point at which a well begins to deviate from vertical), drill until they had reached their desired change in well inclination and/or azimuth, then raise the assembly and swap the curved pipe back to straight.

As earlier mentioned, the ability to track the location of the drill head is of the utmost importance in trying to steer a drill into a pocket of oil or gas. Early efforts in “measurement while drilling” (MWD) consisted of little more than pendulums to determine the inclination of the well, and compasses to determine the azimuth. However, pendulums were ineffective for deeper wells, and compasses often had issues when used inside of well casing as the steel interfered with the Earth’s magnetic field. This ushered in the era of micro-gyroscopes in the early 20th century, which are still used to this day.
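Surveys of this kind reduce to computing a three-dimensional well path from station measurements of depth, inclination, and azimuth. The industry-standard minimum curvature method interpolates between stations with a circular arc; a minimal sketch follows, with station values invented purely for illustration:

```python
import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    """Position change (north, east, tvd) between two survey stations
    using the minimum curvature method. Angles in degrees, depths in feet."""
    i1, a1 = math.radians(inc1), math.radians(azi1)
    i2, a2 = math.radians(inc2), math.radians(azi2)
    # Dogleg: the angle between the tangent vectors at the two stations
    cos_dl = (math.cos(i2 - i1)
              - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1)))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))
    # Ratio factor corrects the straight-line average for arc curvature
    rf = 1.0 if dl < 1e-9 else (2 / dl) * math.tan(dl / 2)
    dmd = md2 - md1
    north = dmd / 2 * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    east = dmd / 2 * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    tvd = dmd / 2 * (math.cos(i1) + math.cos(i2)) * rf
    return north, east, tvd

# Example: building angle from vertical to 30 degrees inclination over
# 200 ft of measured depth, holding a due-north azimuth.
n, e, t = min_curvature_step(1000, 0, 0, 1200, 30, 0)
print(n, e, t)  # roughly (51.2, 0.0, 191.0) feet
```

Summing these per-station displacements from the surface gives the running position of the drill head, which is exactly the information the steering tools described below act upon.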



            The difference between RSS and conventional systems begins at the kick-off point in the well. The systems used to steer the bit vary significantly among manufacturers, but there are two primary platforms that all manufacturers have built their systems upon: “push-the-bit” and “point-the-bit” designs. Push-the-bit systems rely on a small array of pads around the circumference of a sleeve located just behind the bit. This system uses internal hydraulic pressure to selectively actuate the pads outward to push against the side wall of the well; applying pressure to one side causes the bit to curve in the direction opposite of the pad(s) being utilized. By doing this, the tool is able to influence the direction of the bit, steering it while in operation.


A “push-the-bit” system. The pads are actuated in order to “push” the bit in the opposite direction.

Point-the-bit systems work in a very different manner. They steer the bit by using a variety of components to push on the drive shaft, creating a deflection in the bit by way of a fulcrum between the control unit and the bit. Companies have developed unique and clever ways to create this bit deflection. For example, the company Weatherford uses a radial array of sixty-six electronically triggered and hydraulically actuated pistons to flex the shaft in the desired direction.[v] Halliburton uses a pair of nested eccentric rings through which the drive shaft turns. These rings are independently rotated to create the desired bit deflection.[vi]


Weatherford’s “point-the-bit” RSS


Halliburton’s “point-the-bit” RSS
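The geometry of the nested-ring approach can be illustrated with a toy model in which each ring contributes a fixed eccentricity and the two offsets add as two-dimensional vectors. The actual mechanism is proprietary; the eccentricities and angles below are purely illustrative:

```python
import math

def shaft_offset(e1, theta1_deg, e2, theta2_deg):
    """Net lateral offset of the drive shaft produced by two nested
    eccentric rings (toy model: the two offsets add as 2-D vectors).
    Returns (magnitude, direction in degrees)."""
    x = e1 * math.cos(math.radians(theta1_deg)) + e2 * math.cos(math.radians(theta2_deg))
    y = e1 * math.sin(math.radians(theta1_deg)) + e2 * math.sin(math.radians(theta2_deg))
    return math.hypot(x, y), math.degrees(math.atan2(y, x))

# Equal eccentricities held in opposition cancel out: the shaft stays
# centered and the drill proceeds straight ahead.
print(shaft_offset(1.0, 0, 1.0, 180))   # magnitude ~0

# Rotating the rings into alignment produces maximum deflection in
# whatever direction they both point.
print(shaft_offset(1.0, 90, 1.0, 90))   # magnitude 2.0, pointing at 90 degrees
```

Intermediate ring positions give any deflection magnitude between zero and the maximum, in any direction, which is what lets the tool dial in an arbitrary dogleg while rotating.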


Common to both of these systems is the need for the control unit to remain stationary or “upright,” whether the drill string is rotating or not; otherwise, the control systems would have no frame of reference from which to command the direction of travel. Many systems use blade-like devices mounted radially on the outside of the control unit, much like the aforementioned pads in push-the-bit systems and akin to the fletching of an arrow. These devices, coupled with high-performance bearings on both the front and the rear of the control unit, create enough friction against the sidewalls of the well to prevent rotation of the tool. This is essential to maintaining full control of the well’s direction.

Measurement while drilling methods have also advanced substantially since the early days of directional drilling. No longer is it necessary to use pendulums or down-hole cameras to determine the progression of the well. The first major advancement came with the development of what is called “mud pulse telemetry,” which is not in and of itself a tool to pinpoint the location of the drill head, but rather a method of data transmission. Because these drills, and by extension the entire drill string, are continuously pressurized with drilling fluid, also known as “mud,” a clever innovator discovered that one could take this baseline pressure and modulate it over time. To create this variance in pressure, a system of electronics rapidly actuates a valve on the drilling platform to send data to the drill underground, and actuates a valve in the drill to send data back to the platform. These variances in mud pressure effectively encode the drilling fluid with information about the location, depth, inclination, and azimuth of the drill. Pressure differences are received as analog signals (which are then demodulated into digital data) in real time, both by a device coupled to the drill string on the rig and by a device within the control unit adjacent to the drilling head underground. Mud pulse telemetry is the most commonly used method of data transmission in drilling operations, as it is highly reliable and usually fast enough for most wells. Though current technology can reach bandwidth speeds of up to 40 bits/s, significant signal attenuation occurs with depth, and these systems often transmit data at speeds well below 10 bits/s. For highly irregular wells at great depth, low bandwidth can create a serious amount of downtime for the rig while crews wait for information to be exchanged, thus other methods may be employed.[vii]
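The modulation idea can be sketched as simple on/off keying: pressure pulses superimposed on the baseline carry the bits. Real tools use proprietary, far more robust encodings; the pressures and frame below are assumed values for illustration only:

```python
# Toy sketch of mud-pulse telemetry as on/off keying: a valve briefly
# restricts flow to superimpose pressure pulses on the baseline pressure.
# Real systems use proprietary encodings; this only illustrates the idea.

BASELINE_PSI = 3000   # assumed standpipe pressure
PULSE_PSI = 150       # assumed pulse amplitude

def encode(bits, samples_per_bit=4):
    """Each 1-bit becomes a pressure pulse; each 0-bit stays at baseline."""
    signal = []
    for b in bits:
        level = BASELINE_PSI + (PULSE_PSI if b else 0)
        signal.extend([level] * samples_per_bit)
    return signal

def decode(signal, samples_per_bit=4):
    """Recover bits by thresholding each bit period at the halfway point."""
    threshold = BASELINE_PSI + PULSE_PSI / 2
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        window = signal[i:i + samples_per_bit]
        bits.append(1 if sum(window) / len(window) > threshold else 0)
    return bits

survey = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g. a fragment of an inclination reading
assert decode(encode(survey)) == survey
```

Averaging over each bit period is a crude stand-in for the filtering a real receiver performs to reject pump noise, which is one reason practical bit rates are so low.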

Electromagnetic (EM) telemetry has emerged more recently as a system far superior to mud pulse telemetry in certain situations. EM systems utilize a voltage difference between the drill string and a ground rod driven into the Earth some distance away from the well. Though this system can transmit data much faster than mud pulse telemetry for shallow wells and wells that are drilled using air as opposed to a liquid, the electromagnetic signal attenuates very rapidly with well depth. Within the last decade, some companies have also developed drilling pipe that incorporates a wire into the pipe wall. This wire is connected from stick to stick of pipe, and can offer data transfer speeds orders of magnitude faster than either of the previously mentioned methods; over 1,000,000 bits/s. Using a system like this requires drill operators to be much more attentive to the process of building the drill string, ensuring that each connection is resilient enough to withstand the harsh environment it will operate in down-hole. This is the future of MWD, but until this technology becomes common among drillers, manufacturers of these components will be unable to realize the economies of scale that come with mass production, and that ultimately make these components cheaper and more reliable to use.[viii]
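A back-of-envelope comparison using the data rates cited above shows why wired pipe matters for deep, irregular wells. The 200-bit survey frame size is an assumption made only for illustration:

```python
# Back-of-envelope: time to transmit a 200-bit survey frame (assumed
# size) at the data rates cited in the text for each telemetry method.

FRAME_BITS = 200  # hypothetical survey packet

rates = {
    "mud pulse (deep well)": 5,          # well under 10 bits/s, per the text
    "mud pulse (best case)": 40,         # cited upper bound
    "wired drill pipe":      1_000_000,  # cited order of magnitude
}

for name, bps in rates.items():
    print(f"{name:>22}: {FRAME_BITS / bps:,.4f} s")
```

At depth, a single frame can take the better part of a minute over mud pulse, while wired pipe delivers it effectively instantly; multiplied over thousands of frames per day, this is the downtime the text describes.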

This collection of technologies has enabled drilling operators to reach distances that their predecessors likely couldn’t even comprehend. It is not uncommon for a horizontal drilling operation to achieve lateral distances of over one mile, though some operations have gone much further. In 2016, Halliburton and Eclipse Resources drilled the longest horizontal well in the U.S., exceeding 18,500 feet drilled horizontally.[ix] This still doesn’t come close to the record set by Maersk Oil in 2008, when it completed an offshore well in Qatar with a horizontal section 35,750 feet in length. Highlighting the precision of this technology, all of this Qatari well’s nearly seven horizontal miles were drilled through a reservoir target only twenty feet in thickness under the sea floor.[x]



It is only with highly advanced rotary steerable systems and measurement while drilling methods that operators are able to achieve these feats of engineering. Directional drilling offers several fundamental advantages over conventional drilling techniques, the first and perhaps most obvious of which is the ability to drill more wells from the same platform or rig. Whereas in the past drillers had to construct many wellheads in close proximity to one another to increase the rate of resource extraction from a single reservoir, directional drilling allows operators to use a single wellhead to access dozens of different wells. This has had a monumental effect on the efficiency of resource recovery operations, introducing economies of scale that dramatically reduce the cost of infrastructure and the time labor spends moving from location to location. The result is a permanent, severalfold increase in the productivity of assets under management.

Directional drilling also eliminates topographical constraints on where the drilling rig and wellhead are located. Drillers are no longer forced to haul materials off-road, over mountains, or through rivers to position the rig directly above a prospective reservoir, and far fewer roads and bridges have to be constructed to make the rig regularly accessible. One can imagine a multitude of circumstances in which it would be immensely troublesome to clear forests, fill in streams, or level the side of a mountain just to create a good working surface, but with these systems, drillers can locate the wellhead wherever it is easiest. In this way, this method of resource extraction drastically reduces the environmental destruction that occurs in the industry. Additionally, resources located under small bodies of water can be extracted from shore, and resources located under cities or populated areas can be extracted from a safe distance.

In comparison to the directional drilling technology of the mid twentieth century, rotary steerable systems eliminate the downtime drillers were previously subjected to when they had only curved pieces of pipe to use in order to influence the well direction. Drillers pulled the entire drill string out of the well to insert a single piece of curved pipe in order to drill a dog-leg angle, but would then have to pull the drill string out again to convert the drill back to its previous state to continue in a straight line once they had reached the desired inclination and/or azimuth. Disassembling the drill string is a time-consuming process, thus, being able to control the drill while in operation eliminated all of this downtime, shaving days or weeks off of the project. This further reduces operational costs to the company, and by extension, oil and gasoline prices to consumers.

Directional drilling has also been a critical component of the growth in market share of shale oil and gas; had this technology not been developed, the shale revolution would not have been possible. Conventional fossil fuel resources have historically been extracted from homogeneous formations whereby companies only needed to pierce the formation and let the immense pressure deliver the resource to the surface. With respect to shale-derived gas, horizontal drilling is necessary because the resource exists trapped within a non-fluid substrate; one must impart a destructive force on the substrate itself to allow the resource to flow. This means that developing a well requires drillers to have a presence in as much of the formation as possible to maximize the total volume they can fracture. Due to the way in which fossil fuels were formed, reservoirs are typically very thin relative to the overall area they occupy, which means drillers must be able to steer their tools horizontally so as to maximize the surface area of the well that is in contact with the resource. In practice, gas extraction from shale involves a great deal more horizontal drilling than vertical. Moreover, individual shale formations can rise and fall with the topography of the landscape above them, necessitating precision guidance of the drill to stay within the confines of the formation. In short, extracting shale resources would not be economically feasible without directional drilling.


As one can see, modern directional drilling and RSS technology required nearly two centuries of incremental innovation to become what they are today. In many ways, this technology is a testament to mankind’s persistence in improving the efficiency of business operations, never relenting in the pursuit of lower costs and greater profits. Innovators will stop at nothing to create new ways to more efficiently extract, transport, refine, and use our natural resources, making the resulting commodities cheaper and more accessible to the disadvantaged communities of the world. Directional drilling has unlocked reservoirs of fossil fuels that only a decade ago were thought far too costly to extract, but this calculus has been completely transformed by the advanced systems available today. Given how formative rotary steerable systems have been for the past two decades of oil and gas extraction, one can only imagine what is in store for the industry in the future.



[i] Oil Museum of Canada. Oil Springs. 21 November 2015. 10 February 2017.

[ii] American Oil & Gas Historical Society. Technology and the Conroe Crater. 2017. 9 February 2017.

[iii] Baker, M. C. and C. E. Baker. Machine for Operating Drills. United States of America: Patent 292888. 5 February 1884.

[iv] Cross, C. G. Drills for Boring Artesian Wells. United States of America: Patent 142992. 23 September 1873.

[v] Weatherford. Revolution High-Dogleg. 2017. 19 February 2017.

[vi] Halliburton. SOLAR Geo-Pilot XL Rotary Steerable System. 2017. 7 February 2017.

[vii] Wassermann, Ingolf, et al. “Mud-pulse telemetry sees step-change improvement with oscillating shear valves.” Oil and Gas Journal 106.24 (2008).

[viii] National Oilwell Varco. IntelliServ. 2017. 11 February 2017.

[ix] World Oil. Halliburton, Eclipse Resources complete longest lateral well in U.S. 31 May 2016. 8 February 2017.

[x] Gulf Times. Maersk drills longest well at Al Shaheen. 21 May 2008. 14 February 2017.



Air pollution is a problem that every developed nation has had to deal with at some point in its history, albeit some more than others. As economic development progresses, educational standards are achieved, and the standard of living rises for the population at large, citizens begin to demand cleaner air and environmental protection. For both India and Mexico, a legislative push to protect the environment emerged in the 1980s to early 1990s. Mexico City was named the most polluted city in the world in 1992, but while Mexico was cleaning up, Delhi went on to earn this same title in 2013. Why is it that two federal democracies, whose efforts to curtail air pollution have both been significant and began at roughly the same time, have had such drastically different outcomes with respect to environmental regulation? It is hypothesized that these different outcomes are primarily the result of critical institutional, legal, and socioeconomic differences between Mexico and India. Each country’s capital city will be used as a proxy for this comparison.


The United Nations named Mexico City the most polluted city in the world in 1992, bringing international focus to the horrible environmental conditions within the country (United Nations Environment Programme 1992). The city, topographically circumscribed within a valley, had such terrible air quality that birds would drop dead in flight. Smog was so intense that, from within the city, one could not see the surrounding mountains. Extreme concentrations of particulate matter decreased the life expectancy of its inhabitants and posed a severe public health risk. At the local and state levels, ecological concerns had consistently taken a backseat to the rapid economic growth and industrialization that Mexico experienced in the decades leading up to the United Nations report. The federal government, however, had begun shaping governmental institutions to address these problems two decades prior, though it was only once this institutional capacity reached a critical mass in the early 1990s that substantial progress could finally be made.

For about thirty years, Mexico went through a period of institutional shuffle as environmental issues became more and more important to the people. This began in 1972 with the creation of the “Secretariat for Environmental Improvement” under the umbrella of the Ministry of Health and Welfare. In 1982, the Federal Environmental Protection Act created the Ministry of Urban Development and Ecology (SEDUE). These developments were followed by the creation of the National Water Commission in 1989, the National Institute of Ecology (INE), and the Federal Attorney for Environmental Protection (PROFEPA), which were then consolidated into the Ministry of Environment, Natural Resources, and Fisheries (SEMARNAP) in 1994. In 2000, the fisheries subsector was moved into the Secretariat of Agriculture and the institution was once again restructured, yielding the Secretariat of Environment and Natural Resources (SEMARNAT) that we know today (SEMARNAT 2013). The creation and dissolution of this myriad of governmental agencies may seem overwhelmingly complicated and inefficient today, but it reflects a history of commitment to environmental preservation. The federal government of Mexico has constantly refined and streamlined the institutional structure of SEMARNAT and its predecessors to ensure responsiveness in the face of growing ecological concerns, and to ensure that the institution possessed an effective set of tools to confront these problems before they grew too large to be managed.

The environmental regulatory framework has been largely defined by the General Law of Ecological Balance and Environmental Protection (LGEEPA) of 1988 (SEMARNAT 2014). This piece of legislation clearly defined the role of all three levels of government in regulating air pollution, and laid the basis from which the PROAIRE program could derive its resounding success over the following two decades. LGEEPA dictates that the federal government is responsible for “formulating and conducting national environmental policy,” with this power legally exercised by the president through SEMARNAT. There are other federal responsibilities under this law, such as issuing environmental standards and defining reporting mechanisms, but the role outlined for the federal government is limited primarily to the management of state and local entities. The law also gives individual states, in addition to the federal government, the power to design and implement their own “economic instruments” to encourage and discourage certain environmentally relevant activities and to “promote greater social equity in the distribution of costs and benefits associated with the objectives of environmental policy.” It is the role of the states and of municipalities to develop emission reduction plans in their respective jurisdictions, and to then submit these to SEMARNAT for review and approval (Ley General Del Equilibrio Ecológico Y La Protección Al Ambiente 1988). In practice, the state and municipal authorities have engaged the “academic, private, and non-governmental sectors of each city,” so as to formulate policy that works for all parties involved (SEMARNAT 2014). The LGEEPA structure provides accountability, intuitive delegation of authority, and a pathway for escalation that gives the state strength in enforcing these regulations.

            The PROAIRE program implemented through the original LGEEPA legislation gave states and municipalities significant breathing room to develop their own plans and to determine what would work best in their locales. Mexico City took a particularly aggressive approach to combat dangerous levels of pollution within the city, and this has resulted in substantial reductions of dangerous pollutants as well as a large reduction in CO2 emissions. From 1989 to 2015, SEMARNAT found that the PROAIRE program reduced PM10 (particulate matter less than ten microns in diameter) from 175μg/m3 to 40μg/m3, airborne lead from 1.4μg/m3 to 0.05μg/m3, and carbon monoxide from 7ppb to 1ppb. Monitoring of PM2.5 began in 2004, and concentrations have since fallen from 25μg/m3 to 20μg/m3 (Secretaría del Medio Ambiente 2015). These reductions came despite a 9.3% increase in the federal district’s population over the same period. The city used the legal and institutional framework to, among other things, force power plants to move outside densely populated areas, institute a “no-drive-day” once per week, create a bike-sharing program, and convert the entire city bus fleet to natural gas. In addition, as mandated by the LGEEPA legislation, the city has continually devoted significant resources to a public information campaign (C40 2013). This focus on education continues to foster a public sentiment that values a clean environment, with the result that citizens do their part as well.
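As a quick arithmetic check, the concentration figures above can be converted into percentage reductions. The following is a minimal Python sketch using only the values cited from the Secretaría del Medio Ambiente; it adds no new data:

```python
# Percentage reductions in Mexico City pollutant concentrations, 1989-2015,
# computed from the figures cited above.
def pct_reduction(before, after):
    """Percent decrease from `before` to `after`."""
    return 100.0 * (before - after) / before

pm10_drop = pct_reduction(175, 40)    # PM10, ug/m3
lead_drop = pct_reduction(1.4, 0.05)  # airborne lead, ug/m3
co_drop = pct_reduction(7, 1)         # carbon monoxide, ppb

print(f"PM10: -{pm10_drop:.0f}%, lead: -{lead_drop:.0f}%, CO: -{co_drop:.0f}%")
```

The sketch makes the scale of the improvement concrete: roughly a three-quarters drop in PM10 and a near-total elimination of airborne lead.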


In the same United Nations report that named Mexico City the most polluted city in the world in 1992, Delhi was considered to be at risk of further deteriorating air quality. Despite extremely limited atmospheric monitoring infrastructure, upward trends had begun to emerge, providing a glimpse into a dismal future. These trends were driven by “increasing motor vehicle numbers” and a “rapid rate of industrial expansion.” The report concludes that “epidemiological data are urgently needed,” referring to a “high incidence of tuberculosis” that had been linked to pollutants, signifying that by the early 1990s air pollutants had become a major public health concern (United Nations Environment Programme 1992). As in Mexico, though, the federal government had begun to take action two decades prior, passing legislation to create regulations that would presumably curtail these problems.

In contrast to Mexico, India has engaged in very little of this “institutional shuffle,” and has addressed the pollution problem mostly through the creation of new environmental law. India’s environmental conscience was pricked by the Stockholm Declaration of 1972, of which India was a signatory. This prompted parliament to add two articles to the constitution, saddling the government with a mandate to “protect and improve the environment.” This was followed by the passage of the Water Act of 1974, the Water Cess Act of 1977, the Forest Conservation Act of 1980, the Air Act of 1981, the Environment Act of 1986, the National Environment Tribunal Act of 1995, and several others. Of these, it is perhaps the Water Act of 1974 that was most institutionally significant, as it created the Central and State Pollution Control Boards: the first governmental agencies responsible for enforcing environmental legislation (Agarwal 2005). Even after the creation of the Department of Environment in 1980 and its later transformation into the Ministry of Environment and Forests in 1985, the pollution control boards have remained the de facto institutions that identify and reprimand offenders (Indian Institute of Science 2014). The hierarchical structure, as well as the responsibilities, of these boards has remained largely the same since their creation decades ago.

Much attention to air pollution in India over the past two decades has, perhaps strangely, come from the judiciary. The Supreme Court has frequently, in times of regulatory or executive inaction, taken on a very active role, instituting entirely new environmental regulations in the form of what are called “Supreme Court Action Plans” (SCAPs) beginning in 1996. What makes the history of these SCAPs slightly peculiar is the ability of the Supreme Court to bring a public interest suit against the state itself; i.e., the court is granted automatic standing under Indian environmental law (Greenstone and Hanna, Environmental Regulations, Air and Water Pollution, and Infant Mortality in India 2013). When the government has caved to pressure from industry or the public, the Court has often stepped in to force the government to follow through with environmental plans it had already indicated would be implemented. The Supreme Court has acted as a bulwark against this adverse political influence from industry, and its actions have been particularly effective (Narain and Bell 2005).

            Historical air pollution trends are uniquely difficult to quantify for India due to a lack of data. The CPCB first instituted the National Air Monitoring Programme (NAMP) in 1984, but because resources for the program have been so limited, the expansion of the monitoring network has been incredibly slow (Central Pollution Control Board 2003). There is still an insufficient number of monitoring stations in 2015, and the lack of regular processing and reporting of measurements produces very low-resolution trends. Many sites lack the technology to differentiate between PM2.5 and PM10, a necessity for accurately studying air pollution and its public health effects. It is known that PM2.5 and PM10 concentrations have continued to increase in Delhi over the past several decades, driven by a rapidly growing population and increasing numbers of personal vehicles, and they currently sit at levels that constitute a major health risk. The most recent data puts Delhi’s PM10 concentration at 286μg/m3 in 2013, up from 148μg/m3 in 2004. PM2.5 concentration stood at 153μg/m3 in 2013, the highest of any major city in the world, up from approximately 135μg/m3 in 2010 (World Health Organization 2015). These concentrations of respirable particulate matter have been linked to a decrease in life expectancy for Delhi residents of approximately 3.2 years (Greenstone, Nilekani, et al. 2015).

India’s greatest success thus far has been the reduction of lead and carbon monoxide to negligible levels, mostly due to the implementation of the Bharat vehicular emissions standards and the mandated use of catalytic converters on personal vehicles. The implementation of these policies is attributable to the Supreme Court, after the Ministry of Environment and Forests reneged on its prior commitment (Narain and Bell 2005). India has undoubtedly made progress in curtailing some pollutants throughout the country, but the problems are growing faster than the state’s capacity to mitigate them.


            One major contributing factor to the difference in performance between environmental regulation in India and Mexico is the fundamentally different institutional structure of the regulatory agencies. The Ministry of Environment and Forests of India has no presence in the Council of Ministers, as its head is a “Minister of State” as opposed to a “Minister.” This means that the head of the Ministry of Environment and Forests cannot take part in cabinet meetings and has no cabinet minister in the union government overseeing him or her. Though the ministry itself possesses executive authority, it does not answer to a president, but ultimately to the Indian parliament. SEMARNAT of Mexico, by contrast, is a department within the executive cabinet, with the secretary appointed by the president himself. The Secretary of Environment answers to the president of Mexico, and in practice is typically allowed to engage in more independent policy making without excessive fear of political blowback, especially in the case of a lame duck president. Because the Indian Ministry of Environment and Forests is subservient to parliament, and parliament would presumably be more fearful of political backlash resulting from harsher environmental regulation, we should not expect this ministry to engage in as much rule or policy making, despite its executive power.

Lower levels of governance, when left to their own devices, are less likely than federal-level agencies to institute ambitious environmental protection regulations, for fear of the political scrutiny that comes with their proximity to the communities they affect. This problem is exacerbated in India, where the state pollution control boards are composed exclusively of individuals nominated and confirmed within the state legislature. This naturally produces boards that have more difficulty remaining objective, as they are presumably subservient to the legislature that put them in their positions. This diminishes their sense of independence and to some degree their autonomy, making them particularly vulnerable to powerful political pressures from industry. The members of these boards serve three-year terms, but are eligible for re-nomination in perpetuity (The Air Prevention and Control of Pollution Act 1981). In this context, the organization of the pollution control boards creates an incentive structure more akin to a political position than an administrative-bureaucratic one. This is very different from the environmental regulatory agencies that exist at the state level in Mexico. Mexican state governments are structured very similarly to the federal government, meaning that each state has its own environmental secretariat, with the secretary appointed by the governor and the rest of the organization being highly bureaucratic (Constitution of Mexico Article 116 1917). The resulting institution is highly impersonal compared to India’s numerous pollution control boards; a desirable trait for regulatory agencies in particular. In the field of comparative politics, it is accepted that the degree to which institutions are impersonal has a positive relationship with the strength of the state of which they are a part (North, Wallis and Weingast 2009). Insofar as this is true, Mexico, at least at the state level, possesses more institutional capacity and a higher propensity to enforce the regulations that are in place.

            Confrontation of environmental issues in India seems to lack a congruous and unified approach among jurisdictions. It is indeed the duty of the Central Pollution Control Board to “plan and execute a nationwide program for the prevention, control, or abatement of air pollution” and to “coordinate the activities of the States,” but much of what occurs at the national level amounts to setting concentration standards for pollutants and instructing the states to ensure they are met (The Air Prevention and Control of Pollution Act 1981). The environmental secretariat of Mexico instituted the first nationwide program, PIICA, in 1990. It then instituted the aforementioned PROAIRE program in 1996 and has renewed it twice since (Álvarez, Lara and Moreno 2009). Programs like PIICA and PROAIRE serve three purposes. First, by synthesizing all relevant environmental regulation and pollution standards, they present clearly defined goals in a standalone doctrine. Second, they can capitalize on nationalist sentiment to create the aforementioned unified and consistent approach across jurisdictional lines. Third, these programs raise awareness throughout the bureaucracy and the populace, helping to shape public sentiment and make citizens more sympathetic to the overall cause. India has no such comprehensive nationwide program, opting instead to pass over 200 different pieces of environmental legislation without ever integrating these regulations into a program that the country can rally around to spur rapid progress (Agarwal 2005). It stands to reason that this is due to a lack of capacity, resources, or political capital.

            A substantial amount of the variability in environmental outcomes is certainly attributable to the wealth disparity between Mexico and India. In 1990, approximately when pollution mitigation became a national focus in both countries, GDP per capita in Mexico was $3,068 (2015 dollars), whereas GDP per capita in India was $375. By 2014, GDP per capita in Mexico had reached $10,230, with India growing to $1,595 (The World Bank 2015). On a percentage basis, India has vastly outperformed Mexico and most other countries in real GDP growth over the past twenty-five years, but in per capita terms India still falls quite short. Given this comparative lack of wealth, India simply has different priorities than a country like Mexico. There are still large numbers of impoverished people in India, some without enough food to eat or access to electricity. From a governmental humanitarian standpoint, it would make little sense to direct any meaningful amount of resources to advanced air pollutant reduction technologies, and even among environmental programs, there is little doubt that clean water would take priority anyway. From a civil society standpoint, the people of India would presumably be drawn to pool their resources and time to address shortages of these basic human needs rather than the curtailment of air pollution.
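The World Bank figures above can be restated as growth multiples and compound annual growth rates over the 24 years from 1990 to 2014. A minimal sketch of that arithmetic (nominal per-capita figures only, as cited in the text):

```python
# Growth in GDP per capita, 1990-2014, from the World Bank figures cited
# above. India grows faster in percentage terms but stays far poorer in levels.
def cagr(start, end, years):
    """Compound annual growth rate over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

mexico_multiple = 10230 / 3068   # roughly a 3.3x increase
india_multiple = 1595 / 375      # roughly a 4.3x increase

mexico_cagr = cagr(3068, 10230, 24)  # about 5.1% per year
india_cagr = cagr(375, 1595, 24)     # about 6.2% per year
```

This captures the point in the paragraph: India's faster percentage growth coexists with a per-capita level that in 2014 was still well below Mexico's 1990 level.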

Even considering all of the legislative efforts that India has made to clean up pollutants and mitigate their emission, the country has wrangled for years with lackluster enforcement of existing environmental law. Take, for instance, an experiment conducted several years ago in the state of Gujarat. Pollution control boards are legally required to carry out inspections of industry to ensure firms are not violating environmental regulations, but because of a lack of resources the board had instead been requiring that firms hire their own inspectors. The effects of this conflict of interest were exposed when, as part of the experiment, the board employed inspectors itself: emissions were found to have been systematically understated in the past, inspectors began reporting truthfully, and firms began reducing their emissions of pollutants (Duflo, et al. 2013). This study exposed in real terms the cost of insufficient institutional capacity.

Legally, the pollution control boards possess the power to shut down firms entirely if they violate environmental law, or even to have firm managers arrested and charged criminally, but these powers are rarely exercised. It is possible that pollution control board members feel a strong connection to their communities and do not want to be responsible for putting so many people out of work in a country with significant unemployment and low wages to boot. Because advanced social welfare programs are so limited, this is a serious humanitarian and political risk that few would ever be willing to take. Furthermore, in the context of foreign investment, harsh environmental regulations, or strict enforcement thereof, would only serve to deter multinational corporations from locating in India. It is clear that India has attempted to establish a very broad scope of government activity, but lacks the institutional strength to adequately enforce these regulations. India may be better served by a more modest regulatory approach that strikes a more sustainable balance between the scope and strength of the state.


            India and Mexico are two countries that share many similarities when viewed through a wide lens. However, upon further examination of their dramatically different environmental outcomes, one can begin to home in on numerous institutional and socioeconomic differences that may explain Mexico’s relative success and India’s struggle. Strong bureaucratic regulatory agencies in Mexico have utilized their executive authority to implement sweeping national programs, and are better suited and more likely to enforce existing regulations due to their political independence. Most importantly, the relative wealth that Mexico possesses compared to India better positions it in the fight against air pollution, allowing it to direct more resources to needed infrastructure, public information and awareness campaigns, and enforcement activities. Though both countries have made laudable efforts to curtail air pollution, it was Mexico that possessed a sufficient amount of state capacity to make genuine progress, and it was India that “spread its butter too thin.” Like many cases in the field of comparative politics, the reasons for the difference in outcomes are nuanced, consisting of both governmental and socioeconomic factors.


Agarwal, V. K. “Environmental Laws in India: Challenges for Enforcement.” Bulletin of the National Institute of Ecology 15 (2005): 227-238.

Álvarez, Violeta, José Lara, and Adolfo Moreno. Evaluación Y Seguimiento Del Programa Para Mejorar La Calidad Del Aire En La Zona Metropolitana Del Valle De México 2002-2010. Evaluation, Universidad Autonoma Metropolitana-Azcapotzalco, Mexico City: SEMARNAT, 2009, 15-16.

C40. Mexico City: ProAire. 2013. (accessed November 25, 2015).

Central Pollution Control Board. Guidelines for Ambient Air Quality Monitoring. Report, Ministry of Environment & Forests, Delhi: CPCB, 2003, 13.

Constitution of Mexico Article 116. (1917).

Duflo, E, M Greenstone, R Pande, and N Ryan. “Truth-telling by Third-party Auditors and the Response of Polluting Firms: Experimental Evidence from India.” Quarterly Journal of Economics 128, no. 4 (2013): 1449-1498.

Greenstone, Michael, and Rema Hanna. Environmental Regulations, Air and Water Pollution, and Infant Mortality in India . Working Paper, Department of Economics, Massachusetts Institute of Technology, Cambridge: MIT, 2013.

Greenstone, Michael, Janhavi Nilekani, Rohini Pande, Nicholas Ryan, and Anant Sudarshan. “Lower Pollution, Longer Lives: Life Expectancy Gains if India Reduced Particulate Matter Pollution.” Economic and Political Weekly, February 2015: 40-46.

Indian Institute of Science. Pursuit and Promotion of Science. Publication, Bangalore: IISc, 337.

Ley General Del Equilibrio Ecológico Y La Protección Al Ambiente. (El Congreso de los Estados Unidos Mexicanos, El Diario Oficial de la Federación 1988).

Narain, Urvashi, and Ruth Bell. Who Changed Delhi’s Air? Discussion Paper, Washington: Resources for the Future, 2005.

North, Douglass, John Wallis, and Barry Weingast. Violence and Social Orders. Cambridge: Cambridge University Press, 2009.

Secretaría del Medio Ambiente. Sistema de Monitoreo Atmosferico. December 1, 2015. (accessed December 1, 2015).

SEMARNAT. Background of SEMARNAT. December 1, 2013. (accessed November 21, 2015).

—. Management Programs to Improve Air Quality. September 26, 2014. (accessed November 23, 2015).

The Air Prevention and Control of Pollution Act. (Indian Parliament, March 29, 1981).

The World Bank. GDP per capita. 2015. (accessed November 29, 2015).

United Nations Environment Programme. Urban Air Pollution in Megacities of the World. Report, Oxford: Blackwell Publishers, 1992.

World Health Organization. Ambient Air Pollution. 2015. (accessed December 1, 2015).



            Greenhouse gas (GHG) emissions from anthropogenic sources, most notably emissions of carbon dioxide (CO2) from energy-intensive processes, are predicted to accelerate along a non-linear trend that began at the onset of the Industrial Revolution. The growing concentration of CO2 from the combustion of fossil fuels has begun to warm the atmosphere, and only a few decades ago society became aware not only of the presence of more GHGs in the atmosphere, but also of their implications for human civilization. Several sources of renewable energy, such as photovoltaics and wind, have become popular and are marketed as substitutes for fossil fuels. However, these sources are still largely too expensive to be adopted by utilities that have traditionally provided their customers with a pricing structure rooted in cheap electricity from coal. Compared to these renewable sources, nuclear power generation has advantages that society will have to rely on in the coming decades while manufacturing processes are optimized to make solar, wind, and battery technologies more economically viable.


            If there is one historically unifying theme for nuclear power, it is that it has been consistently under-utilized. There are 439 nuclear reactors in operation today, spread across thirty countries, with only fourteen of those countries generating more than 20% of their overall electricity consumption from nuclear. Only three countries, Hungary, Slovakia, and France, use nuclear sources to provide the majority of their electricity (IAEA 2015). This trend seems to be due to several factors, the first being the relatively high capital and financing cost of constructing a new nuclear power plant, coupled with long construction time scales. The upfront capital required for a nuclear plant is higher than for any other generation technology with the exception of offshore wind power and coal-gasification integrated combustion cycle (IGCC) with carbon capture and storage (CCS). Total overnight cost, which includes engineering, procurement, and construction costs but excludes interest on financing, came to $671/kW for modern combustion turbine designs. This is in very stark contrast to modern nuclear reactor installations, which have recently averaged $5,366/kW total overnight cost in the United States (U.S. Energy Information Administration 2015). It is easy to see why utilities prefer traditional fossil fuel plants given such a substantial difference in upfront costs, but this is only a piece of the story.
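The scale of the upfront cost gap is easiest to see when applied to a hypothetical 1 GW plant, using the EIA per-kW overnight cost figures cited above:

```python
# Upfront (overnight) cost gap for a hypothetical 1 GW plant, scaled from the
# EIA per-kW figures cited above. Overnight cost excludes financing interest,
# so the true gap widens further once long nuclear construction times are financed.
PLANT_KW = 1_000_000  # 1 GW expressed in kW

turbine_cost = 671 * PLANT_KW    # about $0.67 billion
nuclear_cost = 5366 * PLANT_KW   # about $5.37 billion

cost_ratio = nuclear_cost / turbine_cost  # roughly 8x more capital up front
```

An eightfold difference in capital at risk, before any interest accrues over an eight-year build, goes a long way toward explaining utility preferences.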

            A new nuclear plant also takes longer to bring online than any other generation technology, even before considering regulatory or political roadblocks. While a conventional coal or gas plant can be completed in about three years from the date of order, a nuclear plant is expected to take about eight (U.S. Energy Information Administration 2015). Add to this the nearly unanimous tendency for nuclear projects to be delayed while still under construction, and you have plants that in some cases end up costing 75% more than initially estimated (World Nuclear News 2014). This is a tremendous investment risk for private investors, governments, and utilities, and has consistently made the estimation of plant cost wildly unpredictable. Delays on nuclear projects are so common that completion on time has become the exception rather than the rule.

            Capital and financing issues are not the only obstacles to bringing new nuclear capacity online. Since the accident at Chernobyl, nuclear power has been characterized as unacceptably dangerous in many countries around the world. There have been countless public demonstrations against nuclear power, with every subsequent nuclear accident revitalizing public opposition and sparking mass protests throughout the world. The meltdown of three reactors at the Fukushima Daiichi power plant in 2011 is classified as the second worst nuclear accident in history and prompted a massive international response, despite no deaths having been attributed to the disaster at this time. Most notably, Chancellor Merkel of Germany committed in the days that followed the Fukushima accident to accelerating the decommissioning of all of Germany’s nuclear reactors (Reuters 2011). It is also estimated that this disaster alone will result in only half the previously projected new nuclear capacity by 2035 due to public concerns, though some major markets such as India and China are expected to proceed with earlier plans to vastly increase their nuclear capacity (The Economist 2011). China has since reiterated its commitment, with a planned completion of 129 new reactors by the year 2030, citing considerable public health concerns over air quality in major population centers as well as an increasing international focus on the mitigation of CO2 emissions (Forbes 2015). Nuclear disasters have regularly sapped interest in the nuclear sector for decades now, but there does seem to be momentum in the industry. Nuclear electricity generation is a very effective way to offset CO2 emissions, and politicians know this, but the public is cautious. Just as the Ukrainian ghost town of Pripyat was contaminated with radioisotopes after the meltdown at Chernobyl, the phrase “nuclear power” has been contaminated with ideas of apocalypse and catastrophe. There must be a relentless focus on improving reactor safety and redundancy, as well as on improving human safety protocols and inspection measures, to reassure the public that this technology is overwhelmingly safe compared to other forms of energy generation.

            The issue of nuclear waste handling has been a source of contentious debate for decades, and will likely continue to be well into the future. Traditionally, many countries treated all spent nuclear fuel as waste, despite spent fuel containing less than 5% undesirable actinides and fission products. This “spent” fuel can be reprocessed to separate the undesirable contents from the uranium and plutonium that are still valuable, but because reprocessing requires relatively expensive infrastructure and new nuclear fuel is relatively cheap, it has rarely been in the economic interest of a company or government to build a fuel reprocessing plant. Despite this, many countries have done just that, partly to reduce fuel costs by recycling the spent fuel, but also because reprocessing makes waste disposal much easier by concentrating the dangerously radioactive waste into a smaller volume. The United States has had a very lackluster history with regard to nuclear waste disposal, choosing not to reprocess spent fuel and to simply store it in sealed “dry casks” above ground (Hashem 2012). This is a very inexpensive method of disposal, but one that many argue is unsustainable and presents unacceptable levels of risk to the public.

            If humanity is to rely on nuclear power to a greater extent in the future, the nuclear fuel cycle must be normalized among countries to create continuity, consistency, and predictability. This would involve a commitment by every country with nuclear capacity to extract the most radioactive actinides from spent fuel to reduce the volume of waste, and then to permanently “freeze” this waste in an insoluble medium such as borosilicate glass or synthetic rock. This ensures that if the high level waste were ever to come into contact with water, it could not leach out and pose a health concern. Countries with nuclear capacity also need to normalize the way in which they ultimately dispose of this high level waste after reprocessing. Projects are underway in several countries to develop geological repositories up to 500m deep, well below the water table in most of the world (World Nuclear Association 2012). These repositories must satisfy several siting constraints, including but not limited to distance from population centers, distance from aquifers, and geological stability. This strategy provides multiple layers of protection against an accidental radiological release into the biosphere, and seems to be the best nuclear waste policy available at this time. It is likely, though, that the populations closest to these sites will be vehement opponents of this disposal process, making it politically difficult to implement.


            There are no CO2 emissions associated with producing electricity at a nuclear power plant, nor harmful emissions like the mercury or sulfur produced when burning coal. An estimated 60 GW of coal capacity will be retired by 2020 in the United States with the advent of the new Mercury and Air Toxics Standards (MATS), which require “significant reductions in emissions of mercury, acid gases, and toxic metals,” and the Clean Power Plan, which limits CO2 emissions by plant type (U.S. Energy Information Administration 2014). This means that it is currently advantageous to begin planning base load replacement in the form of nuclear plants. Because an average-sized coal plant in the United States has a capacity of 250 MW, and an average-sized nuclear plant has a capacity of just over 1 GW, the net effect on overall grid capacity would be insignificant if society were willing to invest in one nuclear plant for every four coal plants that closed. Coal provides about 30% of the United States’ power, and a good starting point for a long-term CO2 mitigation strategy would be to replace all coal power with nuclear sources, leaving the remaining 70% to ultimately be comprised entirely of wind, solar, and hydroelectric capacity (U.S. Energy Information Administration 2013). For every 1 GW of coal capacity replaced by nuclear capacity, on average, 4 MtCO2 is avoided per year. Hypothetically, if all coal capacity in the United States were replaced with nuclear, 1.38 GtCO2 would be avoided annually; equivalent to 4.27% of global CO2 emissions (Davis and Socolow 2014).
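The replacement arithmetic above can be sketched in a few lines. Note that total US coal capacity is not stated in the text, so it is back-derived here from the avoided-emissions figures; the derived values are implications of the cited numbers, not independent data:

```python
# Arithmetic behind the coal-to-nuclear replacement figures cited above.
AVOIDED_PER_GW = 4.0    # MtCO2 avoided per year, per GW of coal replaced
TOTAL_AVOIDED = 1380.0  # MtCO2/yr (1.38 Gt) if all US coal were replaced

# Implied total US coal capacity (not stated directly in the text)
implied_coal_gw = TOTAL_AVOIDED / AVOIDED_PER_GW  # about 345 GW

# One 1 GW nuclear plant replaces four average 250 MW coal plants
# with no net change in grid capacity.
coal_plants_per_nuclear = 1000 / 250  # 4

# Global annual CO2 emissions implied by "1.38 Gt = 4.27% of global"
implied_global_gt = 1.38 / 0.0427  # about 32.3 GtCO2/yr
```

The back-derived global figure (~32 GtCO2/yr) is consistent with commonly reported global emissions for the early 2010s, which suggests the cited numbers are internally coherent.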

            There is no doubt that renewable energy sources must play a substantial role in the future, but nuclear must remain a critical component of the world’s electricity generating capacity for many years to come. The first reason is that wind and solar power are inherently intermittent. Wind power output can usually be assumed to be equivalent to the turbine operating at 100% capacity for 2,200 hours per year, and for most of the United States, solar can be assumed to generate electricity for about 2,400 hours per year (Landsberg and Pinna 1978). This is in contrast to coal, natural gas, and nuclear plants, which can be assumed to operate around 8,000 hours per year, with the only downtime being for maintenance or when electricity demand falls to the point that utilities would lose money by keeping a plant running. This means there must either be a great deal of innovation in the energy storage sector, likely in the form of lithium ion batteries, to store power generated by wind or solar for use at night or when the wind is not blowing, or a renewable energy system must be supplemented with plants that can be turned off and on. Because advanced battery technology is still prohibitively expensive, nuclear must be used in conjunction with wind and solar to provide a base load of power, support the grid in times of need, and offset CO2 emissions.
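The full-capacity-equivalent hours cited above translate directly into capacity factors, assuming the 8,760 hours in a year; a brief sketch:

```python
# Converting full-capacity-equivalent hours (cited above) into capacity
# factors: the fraction of the year a plant effectively runs at full output.
HOURS_PER_YEAR = 8760

def capacity_factor(equiv_hours):
    return equiv_hours / HOURS_PER_YEAR

wind_cf = capacity_factor(2200)      # about 0.25
solar_cf = capacity_factor(2400)     # about 0.27
baseload_cf = capacity_factor(8000)  # about 0.91

# Annual energy from 1 GW of each technology, in TWh
wind_twh = 1.0 * 2200 / 1000      # 2.2 TWh
baseload_twh = 1.0 * 8000 / 1000  # 8.0 TWh
```

In other words, a gigawatt of nuclear or fossil base load delivers roughly 3.5 times the annual energy of a gigawatt of wind, which is the quantitative core of the intermittency argument.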

            Another disadvantage of wind and solar is that their electricity generation potential varies wildly from region to region, even sub-nationally. In the United States, northwestern states such as Montana and Idaho have wind potential areas upwards of 1,000 W/m2, whereas many southern states like Louisiana, Mississippi, Alabama, and Florida have no wind potential at all aside from possible offshore installations, and even most of those top out at only 100 W/m2 (University of Montana 2008). With solar resources, southwestern states such as California, Arizona, and Nevada have vast areas that receive over 9.0 kWh/m2/day in June, while northeastern states like New York, Pennsylvania, and Massachusetts receive in the range of 4.0-5.0 kWh/m2/day, making it roughly twice as effective to generate solar power in the southwest (National Renewable Energy Laboratory 2004). The implication is that it would be inefficient to generate power from these renewables in the “wrong” locations, and that it would make more sense to generate in energy-dense locales and transmit the power elsewhere, though this would vastly increase transmission loss. The point is that renewable energy is geographically and topographically constrained to a much greater degree than nuclear is. It will be necessary to locate nuclear plants in places that do not make solar or wind “sense,” with the only local requirement being some source of freshwater for cooling. This makes nuclear versatile compared to coal and natural gas as well, because coal plants are often located as close as possible to the source of the coal itself due to transportation costs, and natural gas transportation requires expensive underground pipelines. Nuclear reactors consume a very small amount of fuel per kWh in terms of volume and weight compared to these sources, alleviating many of the logistical concerns the fossil fuel industry must face.
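The “twice as effective” claim for solar follows directly from the quoted insolation figures. The sketch below uses the June values from the text and a 15% panel efficiency, which is an illustrative assumption rather than a figure from any source cited here:

```python
# Rough comparison of the solar resources quoted above.
PANEL_EFFICIENCY = 0.15  # assumed for illustration only

southwest_kwh = 9.0  # kWh/m2/day in June (CA, AZ, NV, from the text)
northeast_kwh = 4.5  # midpoint of the 4.0-5.0 kWh/m2/day range quoted

# Electrical energy delivered per square meter of panel per day
sw_output = southwest_kwh * PANEL_EFFICIENCY
ne_output = northeast_kwh * PANEL_EFFICIENCY

print(round(sw_output, 3), round(ne_output, 3), round(sw_output / ne_output, 1))
```

The ratio is insensitive to the assumed efficiency, since it cancels out; the point is simply that the same panel produces about twice as much energy in the southwest.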


            Though nuclear is undoubtedly preferable to fossil fuels for the sake of GHG emissions, we must bear in mind that uranium is a finite natural resource. The consequence is that nuclear power, using current infrastructure and identified resources, is a medium-term solution for energy scarcity, and cannot be considered the end-all solution with current reactor technology. Globally, identified resources of uranium in all cost categories are estimated at 13.5 Mt, which at 2012 rates of uranium consumption would provide a 120-year global supply. Undiscovered resources, estimated from geological data and regional mapping, are thought to amount to about 7.7 Mt. Based on identified resources alone, this implies that a doubling of nuclear capacity would in principle reduce this supply from 120 years’ worth of uranium to 60. Fortunately, relatively higher uranium prices have resulted in more exploration for new sources; because of this, total identified resources increased 10.8% in just the two years from 2011 to 2013 (OECD NEA & IAEA 2014). A trend like this will help mitigate any concerns utilities or governments may have about a uranium shortage within the next several decades. Uranium also exists in seawater at a concentration of 0.003 ppm, and could potentially be extracted if land resources became difficult enough to mine; early research suggests that if uranium prices exceed $600/kg, seawater extraction could become profitable (World Nuclear Association 2015).
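The supply-horizon arithmetic above can be reproduced from the quoted figures; the sketch below assumes constant consumption at the implied 2012 rate, ignoring demand growth:

```python
# Reproducing the uranium supply-horizon arithmetic from the figures in the text.
IDENTIFIED_MT = 13.5    # identified resources, Mt (OECD NEA & IAEA 2014)
UNDISCOVERED_MT = 7.7   # estimated undiscovered resources, Mt
SUPPLY_YEARS = 120      # years of supply at 2012 consumption rates

# Implied annual consumption consistent with the 120-year figure
annual_use_mt = IDENTIFIED_MT / SUPPLY_YEARS  # ~0.1125 Mt/yr

# Doubling nuclear capacity halves the horizon for identified resources
doubled_capacity_years = SUPPLY_YEARS / 2

# Counting undiscovered resources at the current rate extends the horizon
with_undiscovered = (IDENTIFIED_MT + UNDISCOVERED_MT) / annual_use_mt

print(round(annual_use_mt, 4), doubled_capacity_years, round(with_undiscovered))
```

Including undiscovered resources stretches the horizon to roughly 188 years at current consumption, which is why the exploration trend cited above matters.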

            One breath of fresh air with respect to potential uranium scarcity is the breeder reactor design, which operates in a fundamentally different manner than the widely used boiling water reactors (BWR), pressurized water reactors (PWR), and CANDU reactors. The advent of the nuclear era brought with it the idea that uranium was a scarce element, which directed research toward new reactor designs that could extract energy from the fuel more efficiently. This is the primary benefit of a breeder reactor: superior fuel economy compared to conventional designs. Breeder reactors achieve this by not “moderating” the neutrons produced in fission, which is the role of water in nearly all reactors currently in operation. The presence of water slows the neutrons, making them more apt to be captured by and split fissile nuclei like 235U, and less likely to be captured by the non-fissile 238U that comprises the majority of the nuclear fuel. Without a neutron moderator, the neutrons retain much higher energies, and enough of them are captured by the non-fissile 238U, which, after absorbing a neutron and undergoing two beta decays, becomes fissile 239Pu. Breeder reactors can also use a thorium fuel cycle, which increases efficiency even more. The takeaway is that a breeder reactor creates its own fuel, which can be reprocessed into a suitable nuclear fuel on a regular basis. Where conventional reactors extract less than one percent of the potential energy of the uranium ore it takes to produce a viable nuclear fuel, breeder reactors can increase this “by a factor of about 60” (World Nuclear Association 2015). This principle, in conjunction with the theoretical viability of seawater uranium extraction, effectively turns nuclear fuel into a renewable energy source.
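A simple scaling shows what the quoted factor of 60 would mean for the supply horizon; this sketch just multiplies the 120-year figure from the previous paragraph and ignores demand growth:

```python
# What a ~60x improvement in fuel utilization implies for the supply horizon.
CONVENTIONAL_SUPPLY_YEARS = 120  # identified resources at 2012 usage rates
BREEDER_FACTOR = 60              # fuel-economy gain (World Nuclear Association 2015)

breeder_supply_years = CONVENTIONAL_SUPPLY_YEARS * BREEDER_FACTOR
print(breeder_supply_years)
```

Even before counting undiscovered resources or seawater extraction, identified uranium alone would last on the order of 7,000 years under a breeder fuel cycle, which is the basis for treating supply as a non-issue for millennia.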

            Breeder reactors have the added benefit of addressing a large portion of the nuclear waste problem. Actinides, heavy elements formed by neutron capture rather than by the splitting of an atom, are the primary source of long-lived radioactivity in traditional nuclear waste. Because of the un-moderated, higher-energy neutrons in a breeder reactor, these actinides become part of the fuel cycle itself. A breeder reactor can therefore theoretically burn all of these actinides, leaving only lighter and less radioactive fission products. Due to geometric and physical constraints of the fuel, however, this can only be achieved with continuous reprocessing (Bodansky 2006). The breeder reactor shows promise, though it will likely not become popular for new installations until uranium ore is sufficiently expensive and radioactive waste storage capacity becomes overwhelmingly problematic.


            How can society encourage the mass adoption of nuclear power in countries that are not yet concerning themselves with clean energy? Currently, the world powers are reluctant to assist in bringing new nuclear capacity to developing countries for various reasons, but perhaps most of all because of national security concerns. There is an undeniable risk that a nuclear power program, no matter the specifics, could provide some degree of a framework for a terrorist group or a rogue administration to develop nuclear weapons. There are possible solutions to nuclear weapons proliferation, though nearly all would be immensely politically challenging to implement because they all require some level of oversight and capacity for verification.

            First, the world powers could mandate that new nuclear capacity in potentially problematic countries be constructed using the Canadian Deuterium Uranium (CANDU) reactor design, which permits the use of fuel without enrichment. This is important, as uranium enrichment infrastructure can essentially be thought of as the tool that enables the creation of a nuclear bomb. The CANDU design nullifies this because the reactor uses heavy water, water in which the hydrogen is deuterium, an isotope with an atomic mass of two, to moderate the neutrons in the core. Heavy water captures far fewer neutrons than ordinary water, and this improved neutron economy allows the reactor to maintain overall criticality without enriched fuel; much as in the breeder reactor described above, a portion of the reactor’s power also comes from neutron capture in 238U, which transitions to fissile 239Pu.

            However, there are still concerns with the CANDU design. There are only a handful of heavy water manufacturers in the world, and shipping large quantities could be logistically challenging, to say nothing of its exorbitant cost of several hundred dollars per kilogram. Also, as mentioned, CANDU reactors rely on creating much of their power through the conversion of fertile 238U into 239Pu. Given a reasonable fuel reprocessing facility and the scientific know-how, this 239Pu can be isolated from the rest of the waste and, in large enough quantities, potentially used to create a nuclear warhead. Tritium is also an incidental creation in a CANDU reactor, formed when a deuterium atom captures another neutron. Tritium can be used to create a nuclear fusion reaction that greatly multiplies the energy released by a fission weapon when arranged in the proper way. This is what is called a “two-stage,” “thermonuclear,” or “hydrogen” bomb, the most powerful weapon publicly known to have been created. This tritium can also periodically be harvested from the heavy water in the reactor (International Panel on Fissile Materials 2013). Because this waste poses a hypothetical danger in the wrong hands, the International Atomic Energy Agency (IAEA) must be able to monitor the nuclear waste produced in these countries to ensure that none is being diverted to a covert processing plant with the intent of weaponizing the material.

            Lastly, the world powers could develop a global supply chain through the IAEA, whereby only carefully vetted vendors from trusted sources are allowed to enrich, manufacture, and transport nuclear fuel. This would facilitate the use of conventional and cheaper light water reactor (LWR) designs, eliminating concerns about incidental tritium production and heavy water availability, along with most of the concerns regarding the incidental production of 239Pu. In addition to total supply chain management, including nuclear waste management, there must be surveillance capacity at nearly every stage of the nuclear electricity generation process. This strategy also prevents the spread of enrichment infrastructure, which is perhaps the most important concern, but its greatest challenge is political feasibility. A country would be required to allow a United Nations agency to essentially trample on its sovereignty by granting it the authority to inspect virtually any facility anywhere within its borders, at any time. Requiring countries to purchase fuel from verified vendors, likely located in some other country, is an additional condition they are unlikely to find ideal. The very existence of this sort of agreement does not particularly foster a friendly political relationship, and it rests on the presumption that the new nuclear country cannot be trusted.

            Assuming such an inspection and verification process can be achieved, there is still the question of how new nuclear projects will be developed and financed. Working through the United Nations Framework Convention on Climate Change (UNFCCC) and the Special Climate Change Fund (SCCF), new nuclear proposals from developing countries or economies in transition could be refined by experts in the nuclear industry to ensure creditworthiness. The SCCF could provide lower-interest debt financing than the project would otherwise receive from the private sector, or use a cooperative equity model to mitigate more of the upfront costs to the recipient utility or government. Either strategy would help encourage the adoption of nuclear over fossil fuel plants in countries that are expanding their energy capacity, while also ensuring a return on investment for the SCCF. Russia, meanwhile, has been quietly securing contracts with other countries under a “build-own-operate” system, in which the Russian government uses its own nuclear technology to build and permanently operate a reactor in another country. This is advantageous for both parties, but it has geopolitical implications as “Russian-built nuclear power plants in foreign countries become more akin to embassies — or even military bases — than simple bilateral infrastructure projects” (Armstrong 2015). Regardless, without the barrier of billions of dollars of upfront capital and financing costs, nuclear projects will likely look more attractive to prospective countries. This is especially true considering that nuclear would not be subject to any carbon taxes that may ultimately be introduced at a national or international level.
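To see how much concessional financing could matter, consider a standard capital recovery calculation. The project cost, loan term, and both interest rates below are assumptions chosen for illustration; none come from the sources cited in this paper:

```python
# Illustration of how the financing rate changes a nuclear project's annual
# capital cost, using the standard capital recovery factor (CRF).

def annual_payment(principal, rate, years):
    """Level annual payment that repays a loan over the given term."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
    return principal * crf

COST = 6e9   # assumed overnight project cost, USD
TERM = 30    # assumed loan term, years

commercial = annual_payment(COST, 0.08, TERM)    # assumed private-sector rate
concessional = annual_payment(COST, 0.03, TERM)  # assumed SCCF-style rate

savings = 1 - concessional / commercial
print(round(commercial / 1e6), round(concessional / 1e6), round(savings, 2))
```

Under these assumptions, the annual capital charge falls from roughly $533 million to $306 million, a reduction of about 43%, which is the kind of gap that can flip a utility's choice between a coal plant and a nuclear one.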


            Nuclear power will face a multitude of challenges going forward, from political and public opposition to waste management and a lack of capital for project finance, but it will nonetheless remain a technology we must rely on if we wish to deviate significantly from current CO2 emission levels. Wind, solar, and hydroelectric capacity will certainly be of the utmost importance over the next century, but their intermittent nature, coupled with the lack of economically viable battery technology, prevents us from meeting 100% of global energy demand with those sources at this time. Uranium supply does not currently appear to be a significant constraint on the future of nuclear energy, and with future extraction technologies and breeder reactors, it is theoretically a non-issue for millennia. The capability of weaponizing nuclear fuel will be troubling as nuclear power is adopted around the world, but with a comprehensive monitoring and inspection process through the IAEA and United Nations, this concern can be put to rest as well. CO2 emissions must be reduced drastically, and it is up to policy makers to find viable ways for developing countries to continue to experience economic growth while also making the switch to cleaner energy sources.


Armstrong, Ian. “Russia is creating a global nuclear power empire.” Global Risk Insights. October 29, 2015. (accessed November 3, 2015).

Bodansky, David. “The Status of Nuclear Waste Disposal.” American Physical Society 35, no. 1 (January 2006).

Davis, Steven J, and Robert H Socolow. “Commitment accounting of CO2 emissions.” Environmental Research Letters 9, no. 8 (2014).

Forbes. “China Shows How to Build Nuclear Reactors Fast and Cheap.” By James Conca. October 22, 2015. (accessed October 23, 2015).

Hashem, Heba. “Recycling spent nuclear fuel: the ultimate solution for the US?” Nuclear Energy Insider. November 21, 2012. (accessed October 31, 2015).

IAEA. Nuclear Share of Electricity Generation in 2014. October 22, 2015. (accessed October 23, 2015).

International Energy Agency. CO2 Emissions from Fuel Combustion. IEA, 2014, 54.

International Panel on Fissile Materials. India. February 4, 2013. (accessed October 16, 2015).

Landsberg, H. E., and M. Pinna. L’atmosfera e il clima. Torino: UTET, 1978, 63.

National Renewable Energy Laboratory. Direct Normal Solar Radiation. June 2004. (accessed October 25, 2015).

Oak Ridge National Laboratory. 2013 Global Carbon Project. U.S. Department of Energy, Carbon Dioxide Information Analysis Center, U.S. DOE Office of Science, 2013.

OECD NEA & IAEA. Uranium 2014: Resources, Production, and Demand. Report, OECD Nuclear Energy Agency, 2014, 9.

Reuters. “German govt wants nuclear exit by 2022 at latest.” By Annika Breidthardt. May 30, 2011. (accessed October 23, 2015).

The Economist. Gauging the Pressure. April 28, 2011. (accessed October 23, 2015).

U.S. Energy Information Administration. AEO2014 projects more coal-fired power plant retirements by 2016 than have been scheduled. February 14, 2014. (accessed October 28, 2015).

U.S. Energy Information Administration. Annual Electric Generator Report. Report, Washington: U.S. EIA, 2013, Table 4.3.

U.S. Energy Information Administration. Cost and performance characteristics of new central station electricity generating technologies. Annual Report, Washington: U.S. EIA, 2015, Table 8.2.

University of Montana. “Wind power for coal power.” The Maureen and Mike Mansfield Center. Ethics and Public Affairs Program. 2008. (accessed October 25, 2015).

World Nuclear Association. Fast Neutron Reactors. October 2015. (accessed October 24, 2015).

—. Supply of Uranium. September 2015. (accessed October 24, 2015).

—. “Waste Management.” World Nuclear Association. December 2012. (accessed October 31, 2015).

World Nuclear News. New Trends in Financing. September 15, 2014. (accessed October 22, 2015).