
IoT’s Impact on the Data Center and the Role of Intelligent Power

By Marc Cram, Director of New Market Development, Server Technology

Once dubbed the next Industrial Revolution, the Internet of Things (IoT) has proven to be the movement driving the evolution of network, IT, and data center design into the future. To grasp the net impact of all of the new devices situated at the edge of the network, consider this: an estimated 24 billion Internet of Things devices will be online by the end of 2020, more than double the 10 billion devices used directly by people. Intelligent PDUs will play a critical role in managing the networks that support that traffic.

In fact, IoT has had a number of impacts on data center infrastructure, as well as data center services. Not only has IoT driven the creation of more robust networks and IT systems, it has also pushed the boundaries of what was previously understood as cloud and edge computing, and the networks that support those systems.

Lean and mean

When we look at the impact of IoT on data center infrastructure, the greatest tangible effect has been on data center networks. Most facilities have had to adapt in order to keep up with IoT—especially 5G IoT. This has meant an increase in the number of connections and in the overall speed of networks in most deployments, even ones that lean heavily on edge computing. Those edge devices still need to push data back to a central hub for more detailed computing and analysis.

Because of this, the majority of data centers are upping their networking and connectivity game. Another key impact IoT brings to data centers is a different type of capacity demand. IoT devices are continually running and delivering data, meaning that many data centers now have a much smaller window than before to take a network offline or make adjustments. Traditional maintenance windows are now closed, and network architectures have to be adapted to support uptime. The impact on data center infrastructure? It needs to be equally flexible.

More secure

An unexpected impact of IoT on data centers has been the need for an increased security presence at the edge. This security challenge is the unwanted passenger on the IoT train: every new touchpoint and endpoint adds another rider, and each one expands the surface that must be protected.

This increase in the number of devices has presented a unique challenge for those in charge of their company’s networks. The proliferation of traffic has meant that companies are investing in new tools to monitor and manage traffic on their networks. While these tools are mostly in the form of software and IT appliances, there has also been an increase in the adoption of network PDUs.

Everything needs power

While they may seem like an unlikely player in new IoT data center infrastructures, intelligent PDUs are serving a key role in securing networks, supporting uptime, monitoring traffic, and managing systems.

Switched PDUs are the gatekeepers of all the power that is fed to the rack. After all, everything needs power, right? Not only is the rack PDU the bridge between the data center’s entire electrical infrastructure and the devices that run the network, it also provides the nearest touchpoint to monitor and manage that power. Talk about up close and personal!

Monitoring the edge

IoT computing demands more sophisticated monitoring solutions at the rack and PDU level. By definition, edge compute sites are not adjacent to the core data center facility. Lack of proximity means that there is an increased reliance on the ability to monitor power and cooling conditions remotely, as well as the ability to remotely control and reboot single outlets. As IoT has pushed monitoring to the distant reaches of the network, intelligent PDUs have likewise been deployed to provide feedback and control.
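
The remote-monitoring idea above can be sketched in a few lines. This is an illustrative example only; the reading format and the threshold values are assumptions, not a vendor API, and real PDUs would expose these readings through interfaces such as SNMP or a web service.

```python
# Hypothetical illustration: evaluating sensor readings polled from
# edge-site intelligent PDUs against alert thresholds.

# Alert thresholds (assumed values for illustration).
THRESHOLDS = {"temp_c": 32.0, "humidity_pct": 80.0}

def check_readings(readings):
    """Return (site, metric, value) tuples that breach a threshold."""
    alerts = []
    for site, metrics in readings.items():
        for metric, value in metrics.items():
            limit = THRESHOLDS.get(metric)
            if limit is not None and value > limit:
                alerts.append((site, metric, value))
    return alerts

# Example readings, as might be collected remotely from two edge sites.
readings = {
    "edge-01": {"temp_c": 27.5, "humidity_pct": 45.0},
    "edge-02": {"temp_c": 34.1, "humidity_pct": 51.0},  # over temperature
}
print(check_readings(readings))  # [('edge-02', 'temp_c', 34.1)]
```

In practice the alert list would feed a notification or ticketing system rather than a print statement, but the shape of the logic is the same: poll remotely, compare against limits, act without a site visit.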

Monitoring the core

Intelligent PDUs arguably play a more critical role at the core, thanks to IoT. They provide information about equipment operation by metering the input and output power at the PDU. They also provide remote control operations that allow you to turn power on and off to individual receptacles. Having a network connection allows the data center manager to enable or disable outlets from a remote location or within the facility itself. As IoT has required more flexibility and fewer maintenance windows, intelligent PDUs have stepped in to assist with controlling the computing environment.
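
As a rough model of the per-receptacle control described above, consider the sketch below. The class, method names, and outlet numbering are invented for illustration; actual switched PDUs expose this capability through vendor-specific interfaces such as SNMP or a web API.

```python
# A minimal model of switched-PDU outlet control, for illustration only.

class SwitchedPDU:
    """Tracks per-outlet power state and records each control action."""

    def __init__(self, outlet_count):
        self.state = {n: True for n in range(1, outlet_count + 1)}  # all on
        self.audit_log = []

    def set_outlet(self, outlet, on):
        self.state[outlet] = on
        self.audit_log.append((outlet, "on" if on else "off"))

    def reboot_outlet(self, outlet):
        # Power-cycle a single receptacle: off, then back on.
        self.set_outlet(outlet, False)
        self.set_outlet(outlet, True)

pdu = SwitchedPDU(outlet_count=8)
pdu.set_outlet(3, False)   # remotely disable receptacle 3
pdu.reboot_outlet(5)       # power-cycle a hung device on receptacle 5
print(pdu.state[3], pdu.state[5])  # False True
```

The audit log matters as much as the switching itself: because every action is recorded, remote power changes remain traceable, which supports the shrinking-maintenance-window reality described above.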

Monitoring to manage

Increased data traffic and shifting workloads increase the complexity of the data center manager’s power and cooling resources within the facility. By using intelligent PDUs, you can access real-time usage data and environmental alerts. All power usage data is easily tracked, stored, and exported into reports using intelligent PDUs and DCIM software. By analyzing accurate power usage information at the cabinet level, data center managers are now able to more accurately shift power resources within the white space.
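
The cabinet-level analysis described above amounts to aggregating outlet readings and comparing them to provisioned capacity. The sketch below shows the idea; the wattage figures, capacity, and 80 percent warning line are illustrative assumptions, not output from any particular DCIM product.

```python
# Sketch: roll per-outlet PDU power readings up to cabinet level and
# flag cabinets drawing close to their provisioned capacity.

CABINET_CAPACITY_W = 5000   # assumed provisioned capacity per cabinet
WARN_FRACTION = 0.8         # assumed warning threshold (80%)

def cabinet_load(outlet_readings_w):
    """Total draw for one cabinet, in watts."""
    return sum(outlet_readings_w)

def near_capacity(cabinets):
    """Return names of cabinets above the warning fraction of capacity."""
    return [name for name, outlets in cabinets.items()
            if cabinet_load(outlets) > CABINET_CAPACITY_W * WARN_FRACTION]

cabinets = {
    "A01": [350, 420, 610, 500],     # 1880 W total
    "A02": [900, 1100, 1250, 980],   # 4230 W, above the 4000 W warning line
}
print(near_capacity(cabinets))  # ['A02']
```

With this kind of rollup exported into reports over time, a manager can see which cabinets have headroom and shift workloads or power resources within the white space accordingly.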

In short, an intelligent PDU can be the control your data center infrastructure needs to support IoT applications. This is increasingly important as this infrastructure is being pushed closer to the edge with even less time for maintenance. Higher device demand comes with higher power demands, which means more challenges to the network. PDUs help you meet them and anticipate the next IoT evolution.

Marc Cram is Director of New Market Development for Server Technology, a brand of Legrand (@Legrand). A technology evangelist, he is driven by a passion to deliver a positive power experience for the data center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics. Follow him on LinkedIn or @ServerTechInc on Twitter.

The 5 Best Practices in Data Center Design Expansion

By Mark Gaydos, Chief Marketing Officer, Nlyte Software

When it comes to managing IT workloads, it’s a fact that the more software tools there are, the more risk and complexity is introduced. Eventually, the management process becomes like a game of Jenga: touching a piece in the wrong manner can have an adverse effect on the entire stack.

In the past, data center managers could understand all the operational aspects with a bit of intuitive knowledge plus a few spreadsheets. Now, large data centers can have millions, if not tens of millions, of assets to manage. The telemetry generated can reach beyond 6,000,000 monitoring points, which over time can generate billions of data units. In addition, these monitoring points are forecast to grow and spread to the network’s edges and extend through the cloud. AFCOM’s State of the Data Center survey confirms this growth: the average number of data centers per company represented was 12, expected to grow to 17 over the next three years. Across these companies, the average number of data centers slated for renovation is 1.8 this year and 5.4 over the next three years.

Properly managing the IT infrastructure as these data centers expand is no game of chance, and there are proven best practices to leverage that will ensure a solid foundation for years to come.

5 Must-Adhere-To Designs for Data Center Expansion:

  1. Use a Data Center Infrastructure Management (DCIM) solution. As previously mentioned, intuition and spreadsheets cannot keep up with the changes occurring in today’s data center environment. A DCIM solution not only provides data center visualization, robust reporting, and analytics, but also becomes the central source of truth for tracking key changes.
  2. Implement Workflow and Measurable, Repeatable Processes. The IT assets that govern workloads are not like Willy Wonka’s Everlasting Gobstopper—they have a beginning and an end-of-life date. One of the key design best practices is to implement a workflow and repeatable business processes to ensure resources are maintained consistently and all actions are transparent, traceable, and auditable.
  3. Optimize Data Center Capacity Using Analytics and Reporting. From the moment a data center is brought to life, it is constantly being redesigned. To keep up with these changes and ensure enough space, power, and cooling are available, robust analytics and reporting are needed to keep IT staff and facility personnel abreast of current and future capacity needs.
  4. Automation. Automating the operational functions that IT personnel perform helps to ensure consistent deployments across a growing data center portfolio, while reducing costs and human error. In addition, automation needs to occur at multiple stages, from ongoing asset discovery and software audits to workflow and cross-system integration processes.
  5. Integration. The billions of data units previously mentioned can be leveraged by many other operational systems. Integrate the core DCIM solution into other systems, such as building management systems (BMS), IT systems management (ITSM), and with virtualization management solutions such as VMware and Nutanix. Performing this integration will synchronize information so that all stakeholders in a company may benefit from a complete operational analysis.

Find a Complete Asset Management Tool

Technology Asset Management (TAM) software helps organizations understand and gain clarity as to what is installed, what services are being delivered, and who is entitled to use them. Think of TAM as 80% process and 20% technology. Whatever makes that 80% easier will help the IT staff better manage all their software assets. From the data center to the desktop and from Unix to Linux, it does not make a difference: all organizations need visibility into what they have installed and who has access rights.

A good asset manager enables organizations to quickly and painlessly understand their entire user base, as well as the IT services and software versions being delivered. Having full visibility pays high dividends, including:

  • Enabling insights into regulatory environments such as GDPR requirements. If the IT staff understands what the company has, they can immediately link it back to usage.
  • Gaining cost reductions. Why renew licenses that are not being used? Why renew maintenance and support for items that the organization has already retired? Companies can significantly reduce costs by reducing licenses based on current usage.
  • Achieving confidence in software vendor negotiations. Technology Asset Management empowers organizations to know, beyond a shadow of a doubt, what is installed and what is being used. Now the power is back in the company’s hands, not the software publisher’s.
  • Performing software version control. This allows companies to understand their entitlements, how these change over time, and who was using the applications. Software Asset Management allows for software metering to tell, from the user’s perspective, who has, or needs to have, the licenses.
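
The cost-reduction point above boils down to reconciling what is installed against what is actually used. The sketch below illustrates that comparison; the product names, user sets, and function are made up for the example and are not part of any asset-management product.

```python
# Illustrative license reconciliation: find installed-but-unused licenses
# that are candidates to drop at renewal time.

def unused_licenses(installed, used):
    """Per product, users holding a license that is not being exercised."""
    return {product: installed[product] - used.get(product, set())
            for product in installed}

installed = {"cad-suite": {"alice", "bob", "carol"},
             "db-tool":   {"alice", "dave"}}
used = {"cad-suite": {"alice"},
        "db-tool":   {"alice", "dave"}}

reclaimable = unused_licenses(installed, used)
print(sorted(reclaimable["cad-suite"]))  # ['bob', 'carol']
```

Run against real discovery and metering data, this kind of difference is exactly what lets a company trim renewals with confidence instead of guessing.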

Accommodating Your Data Center Expansion

Complexity is all too often the byproduct of expanding data centers, and it is not limited to IT hardware and software. To accommodate this expansion, facility owners are also seeking new types of power sources to offset OPEX. The AFCOM survey underscores the alternative-energy expansion, finding that 42 percent of respondents have already deployed some type of renewable energy source or plan to over the next two months.

Selecting the Right IT Management Tool

Many IT professionals fall into the cadence of adding more software and hardware to manage data center sprawl in all its forms, but this approach often leads to siloed containers and, inevitably, diminishing returns from unshared data. When turning to software for an automated approach to gain more visibility and control over the additional devices and services connected, it is important to carefully consider all integration points.

The selected tool needs to connect with and combine the intelligence of other standard infrastructure tools, such as Active Directory and directory services, for ownership and location. Additionally, any new IT management tool that aims to capture the end-to-end compute system should be able to gather information using virtually any protocol; when protocols are disabled or unavailable, it must have alternative methods for collecting the required information.

IT Workloads are too Important to be Left to Chance

IT workloads are too important to be left to chance, and managing data centers is not a game. Pinging individual devices at the top of the stack to obtain information yields only temporary satisfaction. A devastating crash may be about to happen, and without knowing the stability of all dependencies, the processing tower could topple. Don’t get caught in a Jenga-type crisis. Help mitigate risks with management tools that offer intuitive insights up and down the stack.

Bio
As Chief Marketing Officer at Nlyte Software, Mark Gaydos leads worldwide marketing and sales development. He oversees teams dedicated to helping organizations understand the value of automating and optimizing how they manage their computing infrastructure.