By Marc Cram, Director of Sales, Server Technology
In the 1880s, Thomas Edison and Nikola Tesla battled over which current would power the nation in what is now known as the War of the Currents. Edison championed direct current (DC), which became the early standard in the U.S. Direct current, however, is not easily converted to higher or lower voltages.
Enter Edison’s nemesis, Nikola Tesla. Tesla believed that alternating current (AC) was the solution to the voltage problem. Alternating current reverses direction a certain number of times per second and can be converted to different voltages relatively easily using a transformer.
It’s extraordinary to think that, after all this time, an AC/DC conundrum is still playing out, and nowhere is it more prevalent than in the data center power flow that churns through the workloads supplying the applications of our digital lives. Consider that even when alternative energy is brought into the mix, many of these generation technologies initially produce DC power, yet AC power is still what gets delivered to the IT racks within the data center.
Progress Since the War of the Currents
According to the U.S. Energy Information Administration (EIA), the United States has relied on coal, oil and natural gas for the majority of the energy it has consumed since the early 1900s. Nuclear energy was once seen as the clear successor to coal for domestic electricity generation in the U.S., but a series of mishaps has delayed, perhaps permanently, the widespread adoption of nuclear power. Incidents at Three Mile Island (U.S.), Chernobyl (USSR, now Ukraine) and Fukushima (Japan) have made it difficult in the minds of many to justify growing nuclear power as a source of electricity, to say nothing of the waste byproducts of nuclear fission: the decades-long fight over a long-term storage facility at Yucca Mountain in Nevada has forced many plants to retain their spent nuclear fuel on site.
In the early 2000s, solar power began to be taken seriously as a potential alternative to carbon-based sources of energy. However, its cost per kWh struggled to reach parity with power from coal, oil and natural gas until recently. Even after decades of setbacks, it is still vitally important to pursue renewable forms of energy.
The EIA defines renewable energy as “energy from sources that are naturally replenishing but flow-limited. They are virtually inexhaustible in duration but limited in the amount of energy that is available per unit of time.”
It’s important to keep the EIA definition in mind as data center builders consider renewable power. A mix of renewable sources matters because it lessens the strain on local utilities while also helping operators meet local, state and federal requirements for alternative energy use.
New Alternative Energy Current War?
Since 2001, the uptake of renewable energy has made slow but steady progress. The figure below provides a detailed breakdown of the types of renewable energy used in 2017, with biomass, hydroelectric and wind being the top sources.
Source: https://www.eia.gov/energyexplained/?page=renewable_home
As previously mentioned, the power train supplying a data center has historically been built around AC power. From generation through transmission to the point of use, the power remains AC, stepped up or down in voltage as needed, before finally being converted to DC by the power supply residing in each server, network switch, router, load balancer or storage appliance.
At the other end of the power train, each renewable energy source inherently generates one form of electricity or the other. Photovoltaic (PV) solar cells generate DC, as do biogas- and natural gas-powered fuel cells. To be used in most data centers, the DC power from solar farms and fuel cells goes through an inversion process that turns it into AC. This allows the electricity to be transmitted efficiently across a distance and put back into “the grid” when it is not going into energy storage systems or into loads such as data centers.
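To make the cost of all that back-and-forth conversion concrete, here is a minimal sketch that chains per-stage efficiencies together. The stage names and percentages are illustrative assumptions, not measured figures for any particular facility or vendor.

```python
# Rough sketch: cumulative efficiency of a conventional AC power train
# versus a hypothetical on-site DC path. All stage efficiencies below are
# illustrative assumptions, not measurements of any real deployment.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get end-to-end efficiency."""
    result = 1.0
    for _name, eff in stages:
        result *= eff
    return result

# Conventional path: PV array (DC) -> inverter (AC) -> grid transformers -> UPS -> server PSU (AC to DC)
ac_path = [
    ("PV inverter (DC to AC)", 0.96),
    ("Step-up/step-down transformers", 0.98),
    ("UPS double conversion", 0.94),
    ("Server power supply (AC to DC)", 0.94),
]

# Hypothetical on-site DC path: PV array feeding DC distribution straight to the rack
dc_path = [
    ("DC-DC conversion to distribution voltage", 0.97),
    ("Rack-level DC-DC to server voltage", 0.96),
]

print(f"AC path end-to-end efficiency: {chain_efficiency(ac_path):.1%}")
print(f"DC path end-to-end efficiency: {chain_efficiency(dc_path):.1%}")
```

Whatever the exact numbers, the principle holds: every DC-to-AC-to-DC round trip multiplies in another loss.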
Regardless of the renewable sources available, data center locations are still chosen primarily for their proximity to cheap, reliable AC power from one or more utility providers. However, by using renewable energy sources such as wind, solar, fuel cells and hydroelectricity to power data centers, companies can minimize power transmission and conversion losses, reduce their perceived carbon footprint and gain control over their sources of energy production. This allows them to grow their data centers to meet customer demand while complying with local, state and federal environmental impact laws.
But here is the rub: data centers are rarely situated close enough to the wind farm or to the dam supplying the hydroelectric power, so the data center must rely on an AC infeed supplied by a utility to carry the electricity from the point of generation to the point of consumption. Google’s Sustainability Report underscores this by saying, “The places with the best renewable power potential are generally not the same places where a data center can most reliably serve its users. And while our data centers operate 24/7, most renewable energy sources don’t — yet. So, we need to plug into the electricity grid, which isn’t currently very green.”
Who’s Doing Their Part?
Despite that rub, a good energy precedent is being set by some of the largest data center operators, the ones processing the biggest workloads.
Microsoft has been a notable pioneer in rethinking the power train for its data centers. For example, its data center in Cheyenne, WY is powered by a biogas source supplying fuel cells on site. More recently, Microsoft built an evaluation lab that brings natural gas to the top of the rack and uses fuel cells located there to convert the gas to DC power that is consumed directly by the devices in the rack. This saves on power transmission and conversion losses, at the expense of deploying some potentially costly fuel cells.
Facebook is also leveraging renewable energy sources, and it is predicted that by 2020 the company will have committed to enough new renewable energy resources to equal 100 percent of the energy used by every data center it has built.
For Google’s part, the same Sustainability Report cited above states, “In 2017 Google achieved a great milestone: purchasing 100% renewable energy to match consumption for global operations, including our data centers and offices. We’re investing in a brighter future for the whole industry. And we’re going beyond investing in renewable energy for our own operations—we want to grow the industry as a whole. Not only have we invested $3 billion in renewable energy projects, but we freely share technology that might help others study and respond to environmental challenges.”
Conclusion
The transition to broader adoption of renewable energy production continues for both utilities and consumers, led and largely paid for by the largest internet properties around the globe. By working with utility companies to develop large renewable energy production facilities and committing to purchase their output, the data center giants are leading the way in meeting the clean energy needs of their businesses and communities.
While the renewable energy coming from these projects is still generated as a mix of AC and DC, in the end AC power is the common intermediary that joins the point of production to the utility and on to the point of consumption at the data center. Thus, most enterprise and cloud data centers rely on AC power to run their IT infrastructure. Sorry, Edison: AC is still the elephant in the room.
Whether or not your data center uses renewable energy, AC power is still the primary infeed, and AC power is what gets distributed within the data center to the IT rack. For data centers electing to remain with this tried-and-true approach, look for ways to improve efficiency, such as an intelligent PDU that accepts a 415 VAC 3-phase infeed and delivers 240 VAC at the outlet without requiring a transformer in the PDU. This helps minimize conversion and distribution losses, reduces the size of the copper cabling required for power and enables maximum power density for the rack, resulting in a greener, more efficient data center.
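That 415 V to 240 V relationship is not a product-specific trick; it falls out of three-phase arithmetic, since the line-to-neutral voltage of a three-phase feed is the line-to-line voltage divided by the square root of three. The quick check below is just a sketch of that calculation, not a description of any particular PDU.

```python
import math

# Line-to-line voltage of a 3-phase infeed (volts)
line_to_line = 415.0

# Line-to-neutral voltage = line-to-line / sqrt(3)
line_to_neutral = line_to_line / math.sqrt(3)

print(f"{line_to_line:.0f} VAC 3-phase yields about {line_to_neutral:.0f} VAC line-to-neutral")
# -> 415 VAC 3-phase yields about 240 VAC line-to-neutral
```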
Bio
Marc Cram is Director of Sales for Server Technology (@ServerTechInc), a brand of Legrand (@Legrand). Marc is driven by a passion to deliver a positive power experience for the data center owner/operator. Marc brings engineering, production, purchasing, marketing, sales and quality expertise from the automotive, PC, semiconductor and data center industries together to give STI customers an unequalled level of support and guidance through the journey of PDU product definition, selection and implementation. Marc earned a BSEE from Rice University and has over 30 years of experience in the field of electronics.