The history of petroleum production since the nineteenth century is largely characterized by the technology used to extract it from the Earth. Throughout this history, there have been many paradigm shifts in which a new technological regime dramatically improved the efficiency of petroleum extraction, transport, refining, and use. One of the most recent of these shifts is the development of rotary steerable systems (RSS), which allow drilling operators to control the direction of the drill bit while the drill is still in operation, that is, while the drill is under load and underground. Though the task seems rudimentary, the technology required to achieve this degree of remote control is incredibly complex, among the most advanced that mankind has developed, including a highly sophisticated system of electronics that must operate reliably miles underground in some of the harshest conditions imaginable. These developments have been a critical component of the shale revolution, and are one of the chief reasons that the modern fracking industry exists. We will explore what advances made these systems possible, and why a task as simple as drilling sideways isn’t as easy as it seems.



            The first engine-drilled commercial oil wells were established in the 1850s and 1860s in Canada and the United States, but it took many decades before producers became conscious of the utility of drilling wells at an angle that was not necessarily vertical.[i] The first widely known instance of a drilling rig intentionally drilling at an angle to achieve a specific objective occurred in 1934, when John Eastman and Roman Hines drilled slanted relief wells into an oilfield to reduce the pressure, an instrumental feat in reining in a well that had blown out. Eastman and Hines were featured in an issue of Popular Science in May of that same year, in which the magazine details Eastman’s primitive but effective surveying tool: “Into the hole went a single-shot surveying instrument of Eastman’s own inventions. As it hit bottom, a miniature camera within the instrument clicked, photographing the position of compass needle and a spirit-level bubble.”[ii] Indeed, the ability to “survey” and track the location, inclination (the difference in well angle from vertical), and azimuth (the compass direction) of the drill head is just as critical as having the hardware to drill in a given direction. Because micro-electronics had not yet been invented, early efforts relied on nothing more than crude instrumentation such as Eastman’s camera.

The first piece of technology that truly laid the foundation for directional drilling was the development of what is today known as a “mud motor,” which is a type of progressive cavity positive displacement pump (PCPD) generally located immediately behind the drill bit in the drill string. The mud motors of today look vastly different from early versions, but they can trace their lineage all the way back to two patents: the first, entitled Drills for Boring Artesian Wells, filed by C. G. Cross in 1873, and the second, entitled Machine for Operating Drills, filed by M. C. and C. E. Baker in 1884.


The first piece of technology enabling some measure of directional drilling.

Source: Cross, C. G. Drills for Boring Artesian Wells. United States of America: Patent 142992. 23 September 1873.

The issue that both of these designs were attempting to address was the collapse or breakage of the drill string in rotary and reciprocating drilling operations, respectively. Because early steel was of such poor quality, rotating the entire drill string assembly from top to bottom induced these failures. Shallow wells could be drilled without issue, but the torsional stresses on the drill string only increased as the well got deeper, due to the increasing mass of the string and the torsional flex inherent to the pipe or rods being used. Both Cross and the Bakers developed mechanisms in which the drill string could remain stationary while the drill bit turned, by pressurizing the drill string with fluids such as water, steam, or air, then using this pressure to drive a rudimentary motor that turned the bit.[iii] [iv] With this problem solved, drill operators naturally discovered that they could use a section of slightly curved pipe in the drill string to influence the direction of the drill, so long as the string remained stationary while the bit turned. They could install one length of curved pipe, lower the assembly to the “kickoff point” (the point at which a well begins to deviate from vertical), drill until they had reached their desired change in well inclination and/or azimuth, then raise the assembly and swap the curved pipe back to straight.

As mentioned earlier, the ability to track the location of the drill head is of the utmost importance in steering a drill into a pocket of oil or gas. Early efforts in “measurement while drilling” (MWD) consisted of little more than pendulums to determine the inclination of the well and compasses to determine the azimuth. However, pendulums were ineffective for deeper wells, and compasses often had issues when used inside of well casing, as the steel interfered with the Earth’s magnetic field. This ushered in the era of gyroscopic survey instruments in the early 20th century, descendants of which are still used to this day.



            The difference between RSS and conventional systems begins at the kick-off point in the well. The systems used to steer the bit vary significantly among manufacturers, but all are built upon one of two primary platforms: “push-the-bit” and “point-the-bit” designs. Push-the-bit systems rely on a small array of pads around the circumference of a sleeve located just behind the bit. The system uses internal hydraulic pressure to selectively actuate the pads outward against the side wall of the well; applying pressure to one side causes the bit to curve in the direction opposite the pad(s) being actuated. By doing this, the tool is able to influence the direction of the bit, steering it while in operation.


A “push-the-bit” system. The pads are actuated in order to “push” the bit in the opposite direction.

Point-the-bit systems work in a very different manner. They steer the bit by using a variety of components to push on the drive shaft, creating a deflection at the bit by way of a fulcrum between the control unit and the bit. Companies have developed ingenious ways to create this bit deflection. For example, Weatherford uses a radial array of sixty-six electronically triggered and hydraulically actuated pistons to flex the shaft in the desired direction.[v] Halliburton uses a pair of nested eccentric rings through which the drive shaft turns; these rings are independently rotated to create the desired bit deflection.[vi]
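
To make the nested-ring idea concrete, the deflection can be modeled as the vector sum of two eccentricity offsets, one per ring, each rotated to an independently commanded angle. This is a geometric sketch only; the function name and the 2 mm eccentricities below are illustrative assumptions, not Halliburton’s actual tool geometry.

```python
import math

def bit_offset(e1, e2, theta1, theta2):
    """Resultant lateral offset (x, y) of the drive shaft produced by two
    nested eccentric rings, each modeled as an eccentricity vector (in mm)
    rotated to an independently chosen angle (in radians)."""
    x = e1 * math.cos(theta1) + e2 * math.cos(theta2)
    y = e1 * math.sin(theta1) + e2 * math.sin(theta2)
    return x, y

# Rings aligned: the offsets add, giving maximum shaft deflection.
x, y = bit_offset(2.0, 2.0, 0.0, 0.0)
print(math.hypot(x, y))  # 4.0 (mm, maximum deflection)

# Rings opposed: the offsets cancel and the shaft runs straight.
x, y = bit_offset(2.0, 2.0, 0.0, math.pi)
print(round(math.hypot(x, y), 6))  # 0.0
```

Rotating both rings together then points that fixed deflection in any desired azimuth, which is why two independently driven rings suffice to steer in all directions.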


Weatherford’s “point-the-bit” RSS


Halliburton’s “point-the-bit” RSS


Common to both of these systems is the need for the control unit to remain stationary or “upright” whether or not the drill string is rotating; otherwise, the control systems would have no frame of reference from which to command the direction of travel. Many systems use blade-like devices mounted radially on the outside of the control unit, much like the aforementioned pads in push-the-bit systems, and akin to the fletching of an arrow. These devices, coupled with high-performance bearings on both the front and the rear of the control unit, create enough friction against the sidewalls of the well to prevent rotation of the tool. This is essential to maintaining full control of the well’s direction.

Measurement while drilling methods have also advanced substantially from the early days of directional drilling. No longer is it necessary to use pendulums or down-hole cameras to determine the progression of the well. The first major advancement came with the development of what is called “mud pulse telemetry,” which is not in and of itself a tool to pinpoint the location of the drill head, but rather a method of data transmission. Because these drills, and by extension the entire drill string, are continuously pressurized with drilling fluid, also known as “mud,” engineers realized that this baseline pressure could be modulated over time. To create this variance in pressure, a system of electronics rapidly actuates a valve on the drilling platform to send data to the drill underground, and actuates a valve in the drill to send data back to the platform. These variances in mud pressure encode information about the location, depth, inclination, and azimuth of the drill. The pressure differences are received as analog signals, which are then demodulated into digital data in real time, both by a device coupled to the drill string on the rig and by a device within the control unit adjacent to the drilling head underground. Mud pulse telemetry is the most commonly used method of data transmission in drilling operations, as it is highly reliable and usually fast enough for most wells. Though current technology can reach bandwidth of up to 40 bits/s, significant signal attenuation occurs with depth, and these systems often transmit data at speeds far below 10 bits/s. For highly irregular wells at great depth, low bandwidth can create a serious amount of downtime for the rig while crews wait for information to be exchanged, so other methods may be employed.[vii]
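
The encoding idea can be illustrated with a toy model in which each bit is sent as one symbol interval with pressure either raised above baseline (1) or held at baseline (0). The pressure levels, sample rate, and simple on-off keying below are illustrative assumptions, not any vendor’s actual modulation scheme.

```python
# Toy model of mud pulse telemetry: survey data is serialized to bits, and
# each bit occupies one symbol interval in which a valve either raises mud
# pressure above baseline (1) or leaves it at baseline (0).

BASELINE_PSI = 3000.0   # assumed standpipe pressure
PULSE_PSI = 250.0       # assumed pulse amplitude above baseline
SAMPLES_PER_BIT = 8     # pressure samples recorded per symbol interval

def encode(bits):
    """Turn a bit string into a simulated pressure-vs-time trace."""
    trace = []
    for b in bits:
        level = BASELINE_PSI + (PULSE_PSI if b == "1" else 0.0)
        trace.extend([level] * SAMPLES_PER_BIT)
    return trace

def decode(trace):
    """Recover bits by thresholding the mean pressure of each interval."""
    threshold = BASELINE_PSI + PULSE_PSI / 2
    bits = ""
    for i in range(0, len(trace), SAMPLES_PER_BIT):
        window = trace[i:i + SAMPLES_PER_BIT]
        bits += "1" if sum(window) / len(window) > threshold else "0"
    return bits

# Round-trip a 10-bit survey word (e.g. a quantized inclination reading).
word = "1011001110"
assert decode(encode(word)) == word
# At 10 bit/s, this single word alone occupies a full second of rig time.
print(len(word) / 10, "seconds")  # 1.0 seconds
```

Real systems use positive, negative, or continuous-wave pulsing with error correction, but the principle is the same: information rides on deliberate pressure variations in the mud column.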

Electromagnetic (EM) telemetry has emerged more recently as a system far superior to mud pulse telemetry in certain situations. EM systems utilize a voltage difference between the drill string and a ground rod driven into the Earth some distance away from the well. Though this system can transmit data much faster than mud pulse telemetry for shallow wells and for wells drilled using air as opposed to a liquid, the electromagnetic signal attenuates very rapidly with well depth. Within the last decade, some companies have also developed drill pipe that incorporates a wire into the pipe wall. This wire is connected from joint to joint of pipe, and can offer data transfer speeds orders of magnitude faster than either of the previously mentioned methods: over 1,000,000 bits/s. Using a system like this requires drill operators to be much more attentive to the process of building the drill string, ensuring that each connection is resilient enough to withstand the harsh environment it will operate in down-hole. This is the future of MWD, but until this technology becomes common among drillers, manufacturers of these components will be unable to realize the economies of scale that come with mass production, and that ultimately make these components cheaper and more reliable to use.[viii]
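
A quick back-of-the-envelope comparison shows why these bandwidth differences matter at the rig. The 50-byte survey packet is an assumed size chosen only for illustration; the data rates are the figures quoted above.

```python
# Time to move one hypothetical 50-byte survey packet (400 bits) to surface
# at the data rates quoted for each telemetry method.
PACKET_BITS = 50 * 8

rates = {
    "mud pulse (deep well)": 5,        # bit/s, well under the 10 bit/s figure
    "mud pulse (best case)": 40,       # bit/s
    "wired drill pipe": 1_000_000,     # bit/s
}

for method, bps in rates.items():
    print(f"{method}: {PACKET_BITS / bps:.4f} s")
# mud pulse (deep well): 80.0000 s
# mud pulse (best case): 10.0000 s
# wired drill pipe: 0.0004 s
```

A minute-plus wait per survey point, repeated over thousands of feet of lateral, is where the downtime described above comes from.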

This collection of technologies has enabled drilling operators to reach distances that their predecessors likely could not have comprehended. It is not uncommon for a horizontal drilling operation to achieve lateral distances of over one mile, and some operations have gone much further. In 2016, Halliburton and Eclipse Resources drilled the longest horizontal well in the U.S., exceeding 18,500 feet drilled horizontally.[ix] Even this does not come close to the record set by Maersk Oil in 2008, when it completed an offshore well in Qatar with a horizontal section 35,750 feet in length. Highlighting the precision of this technology, all of this nearly seven-mile horizontal section was drilled through a reservoir target only twenty feet in thickness beneath the sea floor.[x]



It is only with highly advanced rotary steerable systems and measurement while drilling methods that operators are able to achieve such feats of engineering. Directional drilling offers several fundamental advantages over conventional drilling techniques, the first and perhaps most obvious of which is the ability to drill more wells from the same platform or rig. Whereas in the past drillers had to construct many wellheads in close proximity to each other to increase the rate of resource extraction from a single reservoir, directional drilling allows operators to use a single wellhead to access dozens of different wells. This has had a monumental effect on the efficiency of resource recovery operations, introducing economies of scale that dramatically reduce the cost of infrastructure and the amount of time crews spend moving from location to location. The result is a permanent, severalfold increase in the productivity of assets under management.

Directional drilling also eliminates topographical constraints on where the drilling rig and wellhead are located. Drillers are no longer forced to haul materials off-road, over mountains, or through rivers to position the rig above a prospective reservoir, and far fewer roads and bridges have to be constructed to make the rig regularly accessible. One can imagine a multitude of circumstances in which it would be immensely troublesome to clear forests, fill in streams, or level the side of a mountain just to create a good working surface for drillers, but with these systems, drillers can locate the wellhead wherever it is easiest. In this way, this method of resource extraction drastically reduces the environmental destruction that occurs in the industry. Additionally, resources located under small bodies of water can be extracted from shore, and resources located under cities or populated areas can be extracted from a safe distance.

In comparison to the directional drilling technology of the mid-twentieth century, rotary steerable systems eliminate the downtime drillers were previously subjected to when curved pieces of pipe were their only means of influencing well direction. Drillers pulled the entire drill string out of the well to insert a single piece of curved pipe in order to drill a dog-leg, then had to pull the drill string out again to convert the drill back to its previous state and continue in a straight line once they had reached the desired inclination and/or azimuth. Disassembling the drill string is a time-consuming process; being able to control the drill while in operation eliminated all of this downtime, shaving days or weeks off of a project. This further reduces operational costs to the company, and by extension, oil and gasoline prices to consumers.

Directional drilling has also been a critical component of the growth in market share of shale oil and gas; had this technology not been developed, the shale revolution would not have been possible. Conventional fossil fuel resources have historically been extracted from homogeneous formations whereby companies only needed to pierce the formation and let the immense pressure deliver the resource to the surface. With respect to shale-derived gas, horizontal drilling is necessary because the resource exists trapped within a non-fluid substrate; one must impart a destructive force on the substrate itself to allow the resource to flow. This means that developing a well requires drillers to have a presence in as much of the formation as possible to maximize the total volumetric space that they can fracture. Due to the way in which fossil fuels were formed, reservoirs are typically very thin relative to the overall area they occupy, which means that drillers must be able to steer their tools horizontally so as to maximize the surface area of the well that is in contact with the resource. In practice, this means that gas extraction from shale involves a great deal more horizontal drilling than vertical. Moreover, individual shale formations can rise and fall with the topography of the landscape above them, necessitating precision guidance of the drill to stay within the confines of the formation. In short, extracting shale resources would not be economically feasible without directional drilling.


As one can see, modern directional drilling and RSS technology required more than a century and a half of incremental innovation to become what it is today. In many ways, this technology is a testament to mankind’s persistence in improving the efficiency of business operations, never relenting in the pursuit of lower costs and higher profits. Innovators will stop at nothing to create new ways to more efficiently extract, transport, refine, and use our natural resources, making the resulting commodities cheaper and more accessible to the disadvantaged communities of the world. Directional drilling has unlocked reservoirs of fossil fuels that only a decade ago were thought far too costly to extract, but this calculus has been completely transformed by the advanced systems available today. Given how formative rotary steerable systems have been for the past two decades of oil and gas extraction, one can only imagine what is in store for the industry in the future.



[i] Oil Museum of Canada. Oil Springs. 21 November 2015. Accessed 10 February 2017.

[ii] American Oil & Gas Historical Society. Technology and the Conroe Crater. 2017. Accessed 9 February 2017.

[iii] Baker, M. C. and C. E. Baker. Machine for Operating Drills. United States of America: Patent 292888. 5 February 1884.

[iv] Cross, C. G. Drills for Boring Artesian Wells. United States of America: Patent 142992. 23 September 1873.

[v] Weatherford. Revolution High-Dogleg. 2017. Accessed 19 February 2017.

[vi] Halliburton. SOLAR Geo-Pilot XL Rotary Steerable System. 2017. Accessed 7 February 2017.

[vii] Wassermann, Ingolf, et al. “Mud-pulse telemetry sees step-change improvement with oscillating shear valves.” Oil and Gas Journal 106.24 (2008).

[viii] National Oilwell Varco. IntelliServ. 2017. Accessed 11 February 2017.

[ix] World Oil. Halliburton, Eclipse Resources complete longest lateral well in U.S. 31 May 2016. Accessed 8 February 2017.

[x] Gulf Times. Maersk drills longest well at Al Shaheen. 21 May 2008. Accessed 14 February 2017.



            Greenhouse gas (GHG) emissions from anthropogenic sources, more specifically emissions of carbon dioxide (CO2) from energy-intensive processes, are predicted to accelerate along a non-linear trend that began at the onset of the Industrial Revolution in the 19th century. The growing concentration of CO2 from the combustion of fossil fuels has begun to warm the atmosphere, and only a few decades ago, society became aware not only of the presence of more GHGs in our atmosphere, but also of their implications for human civilization. Several sources of renewable energy, such as photovoltaics and wind, have become popular and are marketed as substitutes for fossil fuels. However, these sources are still largely too expensive to be adopted by utilities that have traditionally provided their customers with a pricing structure rooted in cheap electricity from coal. Compared to these renewable sources, nuclear power generation has advantages that humans will have to rely on in the coming decades while manufacturing processes are optimized to make solar, wind, and battery technologies more economically viable.


            If there is one historically unifying theme for nuclear power, it is that it seems to have been consistently under-utilized. Of the 439 nuclear reactors in operation today across thirty countries, only fourteen of those countries generate more than 20% of their overall electricity consumption with nuclear. Only three countries (Hungary, Slovakia, and France) use nuclear sources to provide the majority of their electricity (IAEA 2015). This trend seems to be due to several factors, the first being the relatively high capital and financing cost of constructing a new nuclear power plant, coupled with long construction time scales. The upfront capital required for a nuclear plant is higher than for any other energy generation technology, with the exception of offshore wind power and integrated gasification combined cycle (IGCC) coal plants with carbon capture and storage (CCS). Total overnight cost, which includes engineering, procurement, and construction costs but excludes interest on financing, came to $671/kW for modern natural gas combustion turbine designs. This is in very stark contrast to modern nuclear reactor installations, which have recently averaged $5,366/kW total overnight cost in the United States (U.S. Energy Information Administration 2015). It is easy to see why utilities prefer traditional fossil fuel plants with such a substantial difference in upfront costs, but this is only a piece of the story.
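
Scaled to a nominal 1 GW plant, the gap in the overnight-cost figures above is stark. This is rough arithmetic for illustration only; the 1 GW plant size is an assumption, and financing, fuel, and operating costs are excluded.

```python
# Overnight capital cost comparison using the EIA figures cited above,
# scaled to a nominal 1 GW plant.
GAS_TURBINE_PER_KW = 671      # $/kW
NUCLEAR_PER_KW = 5366         # $/kW
PLANT_KW = 1_000_000          # 1 GW expressed in kW

gas_total = GAS_TURBINE_PER_KW * PLANT_KW
nuclear_total = NUCLEAR_PER_KW * PLANT_KW
print(f"${gas_total / 1e9:.2f}B vs ${nuclear_total / 1e9:.2f}B")
# $0.67B vs $5.37B
print(f"nuclear is {NUCLEAR_PER_KW / GAS_TURBINE_PER_KW:.1f}x more capital-intensive upfront")
# nuclear is 8.0x more capital-intensive upfront
```

An eightfold difference in upfront capital, before interest, is the core of the financing problem described in the next paragraph.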

            Compared with every other energy generation technology, a new nuclear plant takes longer to bring online, even before considering regulatory or political roadblocks. While a conventional coal or gas plant can be completed in about three years from date of order, a nuclear plant is expected to take about eight years (U.S. Energy Information Administration 2015). Add to this the nearly unanimous tendency for nuclear projects to be delayed while still under construction, and the result is a nuclear plant that in some cases ends up costing 75% more than initially estimated (World Nuclear News 2014). This is a tremendous investment risk for private investors, governments, or utilities, and has consistently made the estimation of plant cost wildly unpredictable. Delays on nuclear projects are so common that a project being completed on time has become the exception rather than the rule.

            Capital and financing issues are not the only obstacles to bringing new nuclear capacity online. Beginning with the catastrophic 1986 accident at Chernobyl, nuclear has been characterized as unacceptably dangerous in many countries around the world. There have been countless public demonstrations against nuclear power, with every subsequent nuclear accident revitalizing public opposition and sparking mass protests throughout the world. The meltdown of three reactors at the Fukushima Daiichi power plant in 2011 is classified as the second worst nuclear accident in history and has prompted a massive international response, despite no deaths having been attributed to the disaster at this time. Most notably, Chancellor Merkel of Germany made a commitment in the days that followed the Fukushima accident to accelerate the decommissioning of all of Germany’s nuclear reactors (Reuters 2011). It is also estimated that this disaster alone will result in only half the previously estimated new nuclear capacity by 2035 due to public concerns, though some major markets such as India and China are expected to proceed with earlier plans to vastly increase their share of nuclear power capacity (The Economist 2011). China has since reiterated its commitment, with a planned completion of 129 new reactors by the year 2030, citing considerable public health concerns with air quality in major population centers as well as an increasing international focus on the mitigation of CO2 emissions (Forbes 2015). Nuclear disasters have been a regular source of waning interest in the nuclear sector for decades now, but there does seem to be momentum in the industry. Nuclear electricity generation is a very effective way to offset CO2 emissions, and politicians know this, but the public is cautious.
Just as the Ukrainian ghost town of Pripyat was contaminated with radioisotopes after the meltdown at Chernobyl, the phrase “nuclear power” has been contaminated with ideas of apocalypse and catastrophe. There must be a relentless focus on improving reactor technical safety and redundancy, as well as improving human safety protocol and inspection measures to reassure the public that this technology is overwhelmingly safe when compared to other forms of energy generation.

            The issue of nuclear waste handling has been a source of contentious debate for decades, and will likely continue to be well into the future. Traditionally, many countries treated all spent nuclear fuel as waste, despite less than 5% of the spent fuel consisting of undesirable actinides and fission products. This “spent” fuel can be reprocessed to separate the undesirable contents from the portions of uranium and plutonium that are still valuable, but because reprocessing requires relatively expensive infrastructure and new nuclear fuel is relatively cheap, it has rarely been in the economic interest of a company or government to build a fuel reprocessing plant. Nonetheless, many countries have done just that, partly to reduce fuel costs by recycling the spent fuel, but also because reprocessing makes waste disposal much easier by concentrating the dangerously radioactive waste into a smaller volume. The United States has had a very lackluster history with regard to nuclear waste disposal, choosing not to reprocess spent fuel and instead simply storing it in sealed “dry casks” above ground (Hashem 2012). This is a very inexpensive method of disposal, but many argue it is unsustainable and presents unacceptable levels of risk to the public.

            If humanity is to rely on nuclear power to a greater extent in the future, the nuclear fuel cycle must be normalized among countries to create continuity, consistency, and predictability. This would involve a commitment by every country with nuclear capacity to extract the most radioactive actinides from spent fuel to reduce the volume of waste, and then permanently “freeze” this waste in an insoluble medium such as borosilicate glass or synthetic rock. This ensures that if this high-level waste were ever to come into contact with water, it could not leach out and pose a health concern. Countries with nuclear capacity also need to normalize the way in which they ultimately dispose of this high-level waste after reprocessing. Projects are underway in several countries to develop geological repositories up to 500 m deep, well below the water table in a majority of the world (World Nuclear Association 2012). These repositories must be subject to several constraints, including but not limited to proximity to population centers, proximity to aquifers, and geological stability. This strategy provides multiple layers of protection against an accidental radiological release into the biosphere, and seems to be the best nuclear waste policy at this time. It is likely, though, that the communities closest to these sites will be vehement opponents of this disposal process, making it politically difficult to implement.


            There are no CO2 emissions associated with producing electricity at a nuclear power plant, and no harmful emissions of mercury or sulfur like those produced when burning coal. An estimated 60 GW of coal capacity is expected to be retired by 2020 in the United States with the advent of the new Mercury and Air Toxics Standards (MATS), which require “significant reductions in emissions of mercury, acid gases, and toxic metals,” and the Clean Power Plan, which limits CO2 emissions by plant type (U.S. Energy Information Administration 2014). This makes it advantageous to begin planning base load replacement in the form of nuclear plants now. Because an average-sized coal plant in the United States has a capacity of 250 MW, and an average-sized nuclear plant has a capacity of just over 1 GW, the net effect on overall grid capacity would be negligible if society were willing to invest in one nuclear plant for every four coal plants that closed. Coal provides about 30% of the United States’ power, and a good starting point for a long-term CO2 mitigation strategy would be to replace all coal power with nuclear sources, leaving the remaining 70% to ultimately be comprised entirely of wind, solar, and hydroelectric capacity (U.S. Energy Information Administration 2013). For every 1 GW of coal capacity replaced by nuclear capacity, on average, 4 MtCO2 is avoided per year. Hypothetically, if all coal capacity in the United States were replaced with nuclear, 1.38 GtCO2 would be avoided annually, equivalent to 4.27% of global CO2 emissions (Davis and Socolow 2014).
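
Working backward from the cited figures gives a sense of the scale involved. This is a rough consistency check on the numbers in the paragraph above, not an independent estimate.

```python
# Implied totals behind the cited figures: 4 MtCO2 avoided per year per GW
# of coal capacity replaced, and 1.38 GtCO2 avoided if all US coal goes.
MT_PER_GW_YEAR = 4           # MtCO2 avoided per GW-year of replacement
TOTAL_AVOIDED_GT = 1.38      # GtCO2/yr if all US coal is replaced
GLOBAL_SHARE = 0.0427        # stated fraction of global emissions

implied_coal_gw = TOTAL_AVOIDED_GT * 1000 / MT_PER_GW_YEAR
implied_global_gt = TOTAL_AVOIDED_GT / GLOBAL_SHARE
print(f"implied US coal fleet: {implied_coal_gw:.0f} GW")       # 345 GW
print(f"implied global emissions: {implied_global_gt:.1f} Gt")  # 32.3 Gt
# At ~1 GW per nuclear plant, full replacement means on the order of 345
# new reactors -- consistent with one nuclear plant per four 250 MW coal
# plants retired.
```

The implied fleet size and global total are both in line with mid-2010s US coal capacity and global emissions, so the cited figures hang together.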

            There is no doubt that renewable energy sources must play a substantial role in the future, but nuclear must remain a critical component of the world’s electricity generating capacity for many years to come. There are several reasons for this, the first being that wind and solar power are inherently intermittent. Wind power output can usually be assumed to be equivalent to the turbine operating at 100% capacity for 2,200 hours per year, and for most of the United States, solar can be assumed to generate electricity for about 2,400 hours per year (Landsberg and Pinna 1978). This is in contrast to coal, natural gas, and nuclear plants, which can be assumed to operate around 8,000 hours per year, with the only downtime being for maintenance or when electricity demand falls to a point at which utilities would lose money by keeping a plant operating. This means there must either be a great deal of innovation in the energy storage sector to store power generated by wind or solar for use at night or when the wind is not blowing, likely in the form of lithium-ion batteries, or we must supplement a renewable energy system with plants that we have the ability to turn off and on. Because advanced battery technology is still prohibitively expensive, we must use nuclear in conjunction with wind and solar to provide a base load of power, support the grid in times of need, and offset CO2 emissions.
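
Those full-power-hours figures translate directly into capacity factors, i.e. the fraction of an 8,760-hour year a plant spends at nameplate output. This is a simple conversion of the numbers cited above.

```python
# Capacity factors implied by the full-power-hours figures in the text.
HOURS_PER_YEAR = 8760

for source, full_power_hours in [("wind", 2200), ("solar", 2400), ("baseload", 8000)]:
    cf = full_power_hours / HOURS_PER_YEAR
    print(f"{source}: {cf:.0%}")
# wind: 25%
# solar: 27%
# baseload: 91%
```

A grid built on 25–27% capacity-factor sources needs either storage or dispatchable plants to cover the other three-quarters of the year, which is the argument the paragraph makes for keeping nuclear in the mix.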

            Another disadvantage of wind and solar is that their electricity generation potential varies wildly from region to region, even sub-nationally. In the United States, states in the northwest such as Montana and Idaho have wind potential areas upwards of 1,000 W/m2, whereas many southern states like Louisiana, Mississippi, Alabama, and Florida have no wind potential at all aside from possible offshore installations, and even most of those top out at a potential of only 100 W/m2 (University of Montana 2008). With solar resources, states in the southwest such as California, Arizona, and Nevada have vast portions that receive over 9.0 kWh/m2/day in June, while states in the northeast like New York, Pennsylvania, and Massachusetts receive in the range of 4.0-5.0 kWh/m2/day, making it roughly twice as effective to generate solar power in the southwest (National Renewable Energy Laboratory 2004). The implication is that it would be inefficient to generate power from these renewables in the “wrong” locations, and that it would make more sense to generate in energy-dense locales and transmit the power elsewhere, though this would vastly increase transmission loss. The point is that renewable energy is geographically and topographically constrained to a much greater degree than nuclear. It will be necessary to locate nuclear plants in places that do not make solar or wind “sense,” with the only local requirement being some source of freshwater for cooling. This makes nuclear versatile compared to coal and natural gas as well, because coal plants are often located as close as possible to the source of the coal due to transportation costs, and natural gas transportation requires expensive underground pipelines. Nuclear reactors consume a very small amount of fuel per kWh in terms of volume and weight compared to these sources, alleviating many logistical concerns that the fossil fuel industry must face.


            Though nuclear is undoubtedly preferable to fossil fuels for the sake of GHG emissions, we must bear in mind that uranium is a finite natural resource. The consequence is that nuclear power, using current infrastructure and identified resources, is a medium-term solution for energy scarcity, and cannot be considered the end-all solution with current reactor technology. Globally, identified resources of uranium in all cost categories are estimated at 13.5 Mt, which at 2012 rates of uranium consumption would provide a 120-year global supply. Undiscovered resources, estimated from geological data and regional geographic mapping, amount to about 7.7 Mt. Based on identified resources alone, this implies that doubling nuclear capacity would in principle cut this supply horizon from 120 years to 60. Fortunately, relatively higher uranium prices have spurred more exploration for new sources; total identified resources increased 10.8% in just the two years from 2011 to 2013 (OECD NEA & IAEA 2014). A trend like this will do much to mitigate any concerns utilities or governments have about a uranium shortage within the next several decades. Uranium also exists in seawater at a concentration of 0.003 ppm, and could potentially be extracted if land resources became difficult enough to mine. Early research suggests that if uranium prices exceed $600/kg, it could become profitable to extract uranium from seawater (World Nuclear Association 2015).
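
The supply horizons above follow from simple arithmetic: 13.5 Mt lasting 120 years fixes the implied consumption rate, from which the other figures follow. This is a rough sketch using only the numbers cited in this paragraph.

```python
# Uranium supply arithmetic implied by the cited resource estimates.
IDENTIFIED_MT = 13.5         # identified resources, all cost categories
UNDISCOVERED_MT = 7.7        # estimated undiscovered resources
YEARS_AT_2012_RATE = 120     # stated supply horizon at 2012 consumption

rate = IDENTIFIED_MT / YEARS_AT_2012_RATE            # Mt consumed per year
print(f"implied consumption: {rate * 1000:.0f} kt/yr")
print(f"at doubled capacity: {YEARS_AT_2012_RATE / 2:.0f} years")  # 60 years
total = IDENTIFIED_MT + UNDISCOVERED_MT
print(f"with undiscovered resources: {total / rate:.0f} years")    # ~188 years
```

Even before counting seawater extraction or breeder reactors, folding in undiscovered resources extends the horizon at current consumption from 120 to roughly 188 years.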

            One breath of fresh air with respect to potential uranium scarcity is the breeder reactor design, which operates in a fundamentally different manner than the widely used boiling water reactors (BWR), pressurized water reactors (PWR), and CANDU reactors. The advent of the nuclear era brought with it the idea that uranium was a scarce element, which directed research toward new reactor designs that could extract energy from the fuel more efficiently. This is the primary benefit of a breeder reactor: superior fuel economy compared to conventional designs. Breeder reactors achieve this by not "moderating" the neutrons produced in fission, which is the role of water in nearly all reactors currently in operation. The presence of water slows the neutrons, making them more apt to be captured by and split fissile nuclei like 235U, and less likely to be captured by the non-fissile 238U that comprises the majority of the nuclear fuel. Without a moderator, the neutrons retain a much higher energy, and enough of them are captured by the fertile 238U, which, after absorbing a neutron and undergoing two beta decays, converts to fissile 239Pu. Breeder reactors can also use a thorium fuel cycle, which increases efficiency even further. The takeaway is that a breeder reactor breeds its own fuel, which can be reprocessed into a usable nuclear fuel on a regular basis. Where conventional reactors extract less than one percent of the potential energy in the uranium ore it takes to produce a viable nuclear fuel, breeder reactors can increase this "by a factor of about 60" (World Nuclear Association 2015). This principle, in conjunction with the theoretical viability of seawater uranium extraction, effectively turns nuclear fuel into a renewable energy source.
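Taking the cited factor-of-60 improvement at face value, a one-line calculation shows how far it stretches the 120-year supply horizon discussed earlier:

```python
# If breeder reactors improve uranium utilization "by a factor of about 60"
# (World Nuclear Association 2015), the same identified resources stretch
# from ~120 years to several millennia at present consumption.

conventional_years = 120   # supply horizon with current reactors, from the text
breeder_factor = 60        # fuel-economy improvement cited for fast breeders

breeder_years = conventional_years * breeder_factor
print(f"Supply horizon with breeders: {breeder_years:,} years")
```

Even before counting undiscovered resources or seawater extraction, this simple scaling puts the horizon on the order of 7,000 years, which is the sense in which the text calls the fuel effectively renewable.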

            Breeder reactors have the added benefit of addressing a large portion of the nuclear waste problem. Actinides, heavy elements formed not by the splitting of an atom but by successive neutron captures, are the primary source of long-lived radioactivity in traditional nuclear waste. Because of the un-moderated, higher-energy neutrons in a breeder reactor, these actinides become part of the fuel cycle itself: a breeder can theoretically burn all of them, leaving only lighter and less radioactive fission products. Due to geometric and physical constraints of the fuel, however, this can only be achieved with continuous reprocessing (Bodansky 2006). The breeder reactor shows promise, though it will likely not become popular for new installations until uranium ore becomes sufficiently expensive and radioactive waste storage capacity becomes overwhelmingly problematic.


            How can society encourage the mass adoption of nuclear power in countries that are not yet concerning themselves with clean energy? Currently, the world powers are reluctant to assist in bringing new nuclear capacity to developing countries for various reasons, but perhaps most of all because of national security concerns. There is an undeniable risk that a nuclear power program, no matter the specifics, could provide some degree of a framework for a terrorist group or a rogue administration to develop nuclear weapons. There are possible solutions to nuclear weapons proliferation, though nearly all would be immensely politically challenging to implement because they all require some level of oversight and capacity for verification.

            First, the world powers could mandate that new nuclear capacity in potentially problematic countries be constructed using the Canadian Deuterium Uranium (CANDU) reactor design, which runs on natural, unenriched fuel. This is important, as uranium enrichment infrastructure can essentially be thought of as the tool that enables the creation of a nuclear bomb. The CANDU design sidesteps this because the reactor moderates its neutrons with heavy water, water in which the hydrogen atoms are the heavier isotope deuterium, with an atomic mass of two. Heavy water absorbs far fewer neutrons than ordinary water, and this superior neutron economy allows the reactor to sustain criticality without enriched fuel; some of the spared neutrons are also captured by the fertile 238U, which converts to fissile 239Pu, in a manner loosely analogous to the breeder reactor described above.

            However, there are still concerns with the CANDU design. There are only a handful of heavy water manufacturers in the world, and shipping large quantities could be logistically challenging, to say nothing of its exorbitant cost of several hundred dollars per kilogram. Also, as mentioned, CANDU reactors generate much of their power through the conversion of fertile 238U into 239Pu. Given a reasonable fuel reprocessing facility and scientific know-how, this 239Pu can be isolated from the rest of the waste and, in large enough quantities, potentially used to create a nuclear warhead. Tritium is also created incidentally in a CANDU reactor when a deuterium atom captures another neutron. Tritium can fuel a nuclear fusion reaction that boosts a traditional fission explosion, greatly increasing the energy released, and fusion fuel forms the second stage of what is called a "two-stage," "thermonuclear," or "hydrogen" bomb, the most powerful weapon publicly known to have been created. This tritium can be harvested periodically from the heavy water in the reactor (International Panel on Fissile Materials 2013). Because this waste poses a hypothetical danger in the wrong hands, the International Atomic Energy Agency (IAEA) must be able to monitor the nuclear waste produced in these countries to ensure that none is being diverted to a covert processing plant with the intent of weaponizing the material.

            Lastly, the world powers could develop a global supply chain through the IAEA, whereby only carefully vetted vendors from trusted sources are allowed to enrich, manufacture, and transport nuclear fuel. This would facilitate the use of conventional and cheaper light water reactor (LWR) designs, eliminating concerns about incidental tritium production and heavy water availability, along with most of the concerns regarding incidental production of 239Pu. In addition to total supply chain management, including nuclear waste management, there must be surveillance capacity at nearly every stage of the nuclear electricity generation process. This strategy also prevents the spread of domestic enrichment infrastructure, perhaps the most important concern, but its greatest challenge is political feasibility. A country would be required to allow a United Nations agency to essentially trample on its sovereignty by granting it the authority to inspect virtually any facility anywhere within its borders at any time. Requiring countries to purchase fuel from verified vendors, likely located in some other country, is a further condition they are unlikely to find ideal. The very existence of such an agreement does not foster a friendly political relationship, resting as it does on the presumption that the new nuclear country cannot be trusted.

            Assuming the mechanics of the inspection and verification process can be worked out, there is still the question of how new nuclear projects will be developed and financed. Working through the United Nations Framework Convention on Climate Change (UNFCCC) and the Special Climate Change Fund (SCCF), new nuclear proposals from developing countries or economies in transition could be refined by experts in the nuclear industry to ensure creditworthiness. The SCCF could provide lower-interest debt financing than the project would otherwise receive from the private sector, or use a cooperative equity model to mitigate more of the upfront costs to the recipient utility or government. Either strategy would help encourage the adoption of nuclear over fossil fuel plants in countries that are expanding their energy capacity, while also ensuring a return on investment for the SCCF. Russia, meanwhile, has been quietly securing contracts with other countries under a "build-own-operate" system, in which the Russian government uses its own nuclear technology to build and permanently operate a reactor in another country. This is advantageous for both parties, but has geopolitical implications, as "Russian-built nuclear power plants in foreign countries become more akin to embassies — or even military bases — than simple bilateral infrastructure projects" (Armstrong 2015). Regardless, without the barrier of billions of dollars of upfront capital and financing costs, nuclear projects will likely look more attractive to prospective countries, especially since nuclear would not be subject to any carbon taxes that may ultimately be introduced at a national or international level.
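To see why concessional financing matters so much for a capital-intensive plant, here is a sketch using the standard annuity formula; the project cost and both interest rates are hypothetical assumptions for illustration, not figures from the text:

```python
# Illustrative annual debt-service comparison for concessional vs. market
# financing of a nuclear project. All figures are hypothetical: a $6B
# project amortized over 30 years at two different interest rates.

def annual_payment(principal, rate, years):
    """Level annual payment on an amortizing loan (standard annuity formula)."""
    if rate == 0:
        return principal / years
    return principal * rate / (1 - (1 + rate) ** -years)

PRINCIPAL = 6_000_000_000   # hypothetical overnight cost, USD
YEARS = 30

market = annual_payment(PRINCIPAL, 0.08, YEARS)        # private-sector rate (assumed)
concessional = annual_payment(PRINCIPAL, 0.03, YEARS)  # SCCF-style rate (assumed)

print(f"Market financing:       ${market / 1e6:,.0f}M/yr")
print(f"Concessional financing: ${concessional / 1e6:,.0f}M/yr")
print(f"Annual savings:         ${(market - concessional) / 1e6:,.0f}M/yr")
```

Under these assumed terms, the lower rate cuts annual debt service by hundreds of millions of dollars, which is the kind of gap that can tip a utility's choice between a nuclear plant and a cheaper-to-build fossil fuel plant.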


            Nuclear power will face a multitude of challenges going forward, from political and public opposition to waste management and a lack of capital for project finance, but it will nonetheless remain a technology we must rely on if we wish to cut significantly the amount of CO2 we are emitting. Wind, solar, and hydroelectric capacity will certainly be of the utmost importance over the next century, but their intermittent nature, coupled with the lack of economically viable battery technology, prevents those sources from meeting 100% of global energy demand at this time. Uranium supply does not currently appear to be a significant constraint on the future of nuclear energy, and with future extraction technologies and breeder reactors it is theoretically a non-issue for millennia. The potential for weaponizing nuclear fuel will be troubling as nuclear power spreads around the world, but with a comprehensive monitoring and inspection process through the IAEA and United Nations, this concern can be addressed as well. CO2 emissions must be reduced drastically, and it is up to policy makers to find viable ways for developing countries to continue experiencing economic growth while making the switch to cleaner energy sources.


Armstrong, Ian. “Russia is creating a global nuclear power empire.” Global Risk Insights. October 29, 2015. (accessed November 3, 2015).

Bodansky, David. “The Status of Nuclear Waste Disposal.” American Physical Society 35, no. 1 (January 2006).

Davis, Steven J, and Robert H Socolow. “Commitment accounting of CO2 emissions.” Environmental Research Letters 9, no. 8 (2014).

Conca, James. "China Shows How to Build Nuclear Reactors Fast and Cheap." Forbes. October 22, 2015. (accessed October 23, 2015).

Hashem, Heba. “Recycling spent nuclear fuel: the ultimate solution for the US?” Nuclear Energy Insider. November 21, 2012. (accessed October 31, 2015).

IAEA. Nuclear Share of Electricity Generation in 2014. October 22, 2015. (accessed October 23, 2015).

International Energy Agency. CO2 Emissions from Fuel Combustion. IEA, 2014, 54.

International Panel on Fissile Materials. India. February 4, 2013. (accessed October 16, 2015).

Landsberg, H. E., and M Pinna. “L’atmosfera e il clima.” In UTET, 63. Torino, 1978.

National Renewable Energy Laboratory. Direct Normal Solar Radiation. June 2004. (accessed October 25, 2015).

Oak Ridge National Laboratory. 2013 Global Carbon Project. U.S. Department of Energy, Carbon Dioxide Information Analysis Center, U.S. DOE Office of Science, 2013.

OECD NEA & IAEA. Uranium 2014: Resources, Production, and Demand. Report, OECD Nuclear Energy Agency, 2014, 9.

Breidthardt, Annika. "German govt wants nuclear exit by 2022 at latest." Reuters. May 30, 2011. (accessed October 23, 2015).

The Economist. Gauging the Pressure. April 28, 2011. (accessed October 23, 2015).

U.S. Energy Information Administration. AEO2014 projects more coal-fired power plant retirements by 2016 than have been scheduled. February 14, 2014. (accessed October 28, 2015).

U.S. Energy Information Administration. Annual Electric Generator Report. Report, Washington: U.S. EIA, 2013, Table 4.3.

U.S. Energy Information Administration. Cost and performance characteristics of new central station electricity generating technologies. Annual Report, Washington: U.S. EIA, 2015, Table 8.2.

University of Montana. “Wind power for coal power.” The Maureen and Mike Mansfield Center. Ethics and Public Affairs Program. 2008. (accessed October 25, 2015).

World Nuclear Association. Fast Neutron Reactors. October 2015. (accessed October 24, 2015).

—. Supply of Uranium. September 2015. (accessed October 24, 2015).

—. “Waste Management.” World Nuclear Association. December 2012. (accessed October 31, 2015).

World Nuclear News. New Trends in Financing. September 15, 2014. (accessed October 22, 2015).