
Does Your Edge Data Center Disaster Recovery Plan Include IoT Platforms?

By Michael C. Skurla, Chief Technology Officer, Radix IoT

From manufacturing to healthcare to smart city infrastructure, Edge computing frameworks are enabling business and consumer networks worldwide. About 75 billion IoT devices are expected to be online by 2025, with an estimated 127 new devices coming online every second.

The more connected devices there are, the higher the demand for uninterrupted connectivity – and the more critical the role of Edge data centers, which, placed closer to users, expedite processing. As the next generation of data centers, Edge sites play a critical role in our always-on society. Especially now that working and learning from home have become the norm, neither businesses nor consumers can afford downtime or latency. This low tolerance for failure or outages positions Edge data centers as essential infrastructure: facilities strategically located close to essential public or private enterprise applications and built out in networks parallel to larger data centers.

Set in remote, hard-to-reach areas, unmanned Edge data centers are operated mostly with software solutions – including IoT platforms, which enable remote management, triage, and monitoring of the sites. This arrangement is ideal for maintaining operations amid disasters, as the current pandemic crisis has proven.

Full interoperability with the existing systems allows IoT platforms to be seamlessly connected and up and running in no time. Whether managing one site or thousands of geographically distributed sites, IoT platforms allow operators to remotely manage and troubleshoot portfolios of locations from one central location.

IoT Platforms Offer Actionable Analytics

IoT platforms’ unified management dashboards allow operators to remotely monitor and manage critical facilities’ equipment across geographically distributed locations, maintaining business continuity from a single pane of glass. The data acquired from IoT platforms feeds runtime analytics, enabling preventative maintenance that eliminates costly downtime. With an average network outage costing nearly $5,600 per minute – over $300,000 per hour – according to Gartner, avoiding downtime is critical. With an IoT platform in place, operators receive risk notifications before major problems turn into costly, irreversible disasters. The ability to avert risks in a timely manner using remote triage drastically lowers operational costs while maintaining uninterrupted uptime. By securing their critical facility – maintaining locks, alarms, HVAC, and other systems – operators preserve uptime while lowering OpEx.
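The alerting flow described above can be sketched in a few lines. This is an illustrative example only, not any vendor’s actual API: the metric names and thresholds are invented, and the per-minute figure is Gartner’s oft-cited average.

```python
# Sketch of threshold-based risk notification, as an IoT platform might
# apply it to facility telemetry. Names and thresholds are illustrative.

DOWNTIME_COST_PER_MINUTE = 5_600  # Gartner's oft-cited average, USD

# Hypothetical warning thresholds per monitored metric.
THRESHOLDS = {
    "hvac_return_air_c": 27.0,  # warn before equipment overheats
    "ups_load_pct": 80.0,       # warn before the UPS saturates
    "humidity_pct": 60.0,
}

def risk_alerts(readings: dict) -> list:
    """Return alerts for every metric that crosses its warning threshold."""
    return [
        f"WARNING: {metric}={value} exceeds {THRESHOLDS[metric]}"
        for metric, value in readings.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    ]

def downtime_cost(minutes: float) -> int:
    """Estimated cost of an outage of the given duration."""
    return int(minutes * DOWNTIME_COST_PER_MINUTE)
```

The point of the sketch is the ordering: the warning fires while the reading is still below failure, which is what lets remote triage happen before an outage starts accruing cost.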

With data aggregated, organized, and analyzed across all technologies and subsystems, operators can keep their facilities alive while turning aggregated data into useful, actionable, outcome-based analytics. Since IoT platforms are not tied to specific vendors’ equipment, they provide unprecedented flexibility in deployment design, and the ability to have generations of deployed infrastructure without fear of incompatibility.

IoT Platforms Are Vendor-Lock Free

IoT platforms approach data fundamentally differently: they consolidate data from all the subsystems that speak to each other, regardless of the brand or type of equipment. In consolidating and organizing data lakes, they rationalize various sizes and shapes of data from often different equipment manufacturers into a consistent source of truth across one or many facility sites. A single pane of glass enables full visibility and control.
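The consolidation step can be illustrated with a toy normalizer. Everything here is hypothetical: real platforms handle far more metrics and protocols, and the two vendor payload shapes are invented for the example.

```python
# Illustrative sketch of rationalizing differently shaped readings from
# two (invented) vendors' equipment into one consistent record.

def normalize(site: str, raw: dict) -> dict:
    """Map a vendor-specific payload onto a common schema."""
    if "tempF" in raw:  # hypothetical vendor A reports Fahrenheit
        celsius = (raw["tempF"] - 32) * 5 / 9
        return {"site": site, "metric": "temperature_c", "value": round(celsius, 1)}
    if "temp_c_x10" in raw:  # hypothetical vendor B reports tenths of a degree C
        return {"site": site, "metric": "temperature_c", "value": raw["temp_c_x10"] / 10}
    raise ValueError(f"unrecognized payload: {raw}")
```

Once every subsystem’s output lands in the same shape, a single dashboard (or a downstream BI engine) can query one consistent source of truth instead of one parser per vendor.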

IoT solutions’ true value is in data delivery. They are not one-size-fits-all monitoring applications; they use the portfolio of data to allow highly flexible management that users can easily adapt to their specific business needs. Because they are vendor-agnostic and open-source, devices from different vendors integrate seamlessly, without vendor lock-in. This enables consolidated management of all the existing systems within facilities, without replacing the BMS or purpose-built solutions for individual trades. Operators can easily adapt IoT platforms to their specific monitoring or control requirements. Where actionable analytics were previously locked inside a single trade’s data silo, operators can now seamlessly connect the consolidated dataset to BI engines, external micro-service analytics, or other software tools.

IoT Platforms Unify Data Management Across Your Entire Infrastructure

As social distancing guidelines will most likely continue for the foreseeable future, critical facility operators must maintain the integrity of their facilities remotely, keeping staff safe. IoT platforms continue to be the indispensable tools that enable this remote monitoring and management of critical infrastructure. With most average-sized facilities having 10 or more data-producing subsystems (HVAC, electrical, wi-fi, water, lighting control, etc.), operators can no longer afford to leave data trapped in the siloed ecosystems of individual trades – or worse yet, disconnected from solutions that allow for data harvesting.

IoT platforms are the safest, most valuable unifying layer over any existing infrastructure, providing real-time access to actionable data while enabling remote, cost-effective monitoring and control of facilities. As Edge data centers proliferate, IoT platforms’ ability to rapidly provision newly deployed facilities globally positions them as the most comprehensive solution for pandemic-proofing the management and monitoring of critical facilities.

Bio

Michael C. Skurla is the Chief Technology Officer of Radix IoT – offering limitless monitoring and management rooted in intelligence – and has over two decades’ experience in control automation and IoT product design with Fortune 500 companies. He is a contributing member of CABA, ASHRAE, IES Education, and USGBC and a frequent lecturer on the evolving use of analytics and emerging IT technologies to foster efficiency within commercial facility design.

IoT’s Impact on the Data Center and the Role of Intelligent Power

By Marc Cram, Director of New Market Development, Server Technology

Once dubbed the next Industrial Revolution, the Internet of Things (IoT) has proven to be the movement driving the evolution of network, IT, and data center design into the future. To sum up the net impact of all the new devices at the edges of all these networks, consider this: some 24 billion Internet of Things devices will be online by the end of 2020 – more than double the 10 billion devices used directly by people. Intelligent PDUs will play a critical role in managing the networks that support that traffic.

In fact, IoT has had a number of impacts on data center infrastructure, as well as data center services. Not only has IoT driven the creation of more robust networks and IT systems, it has also pushed the boundaries of what was previously understood as cloud and edge computing, and the networks that support those systems.

Lean and mean

When we look at the impact of IoT on data center infrastructure, the greatest tangible effect has been on data center networks. Most facilities have had to adapt in order to keep up with IoT—especially 5G IoT. This has meant an increase in the number of connections and in the overall speed of networks in most deployments, even ones that lean heavily on edge computing. Those edge devices still need to push data back to a central hub for more detailed computing and analysis.

Because of this, the majority of data centers are upping their networking and connectivity game. Another key impact IoT brings to data centers is a different type of capacity demand. IoT devices are continually running and delivering data, meaning that many data centers now have a much smaller window than before to take a network offline or make adjustments. Traditional maintenance windows are now closed, and network architectures have to be adapted to support uptime. The impact on data center infrastructure? It needs to be equally flexible.

More secure

An unexpected impact of IoT on data centers has been the need for an increased security presence at the edge. This security challenge is the unwanted passenger that rides along with network growth: every new IoT touchpoint and endpoint adds another seat on the train.

This increase in the number of devices has presented a unique challenge for those in charge of their company’s networks. The proliferation of traffic has meant that companies are investing in new tools to monitor and manage traffic on their networks. While these tools are mostly in the form of software and IT appliances, there has also been an increase in the adoption of network PDUs.

Everything needs power

While they may seem like an unlikely player in new IoT data center infrastructures, intelligent PDUs are serving a key role in securing networks, supporting uptime, monitoring traffic, and managing systems.

Switched PDUs are the gatekeepers of all the power that is fed to the rack. After all, everything needs power, right? Not only is the rack PDU the bridge between the data center’s entire electrical infrastructure and the devices that run the network, it also provides the nearest touchpoint to monitor and manage that power. Talk about up close and personal!

Monitoring the edge

IoT computing demands more sophisticated monitoring solutions at the rack and PDU level. By definition, edge compute sites are not adjacent to the core data center facility. Lack of proximity means that there is an increased reliance on the ability to monitor power and cooling conditions remotely, as well as the ability to remotely control and reboot single outlets. As IoT has pushed monitoring to the distant reaches of the network, intelligent PDUs have likewise been deployed to provide feedback and control.

Monitoring the core

Intelligent PDUs arguably play a more critical role at the core, thanks to IoT. They provide information about equipment operation by metering the input and output power at the PDU. They also provide remote control operations that allow you to turn power on and off to individual receptacles. Having a network connection allows the data center manager to enable or disable outlets from a remote location or within the facility itself. As IoT has required more flexibility and fewer maintenance windows, intelligent PDUs have stepped in to assist with controlling the computing environment.
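A toy model makes the metering-plus-switching combination concrete. The class and method names below are invented for illustration; real intelligent PDUs expose equivalent operations over SNMP, Redfish, or a vendor web API rather than a Python object.

```python
# Toy model of a switched, metered PDU: per-outlet power readings plus
# remote on/off control of individual receptacles.

class SwitchedPDU:
    def __init__(self, outlets: int):
        self.state = {n: True for n in range(1, outlets + 1)}  # all outlets on
        self.watts = {n: 0.0 for n in range(1, outlets + 1)}

    def set_outlet(self, n: int, on: bool) -> None:
        """Remotely enable or disable a single receptacle."""
        self.state[n] = on
        if not on:
            self.watts[n] = 0.0  # a disabled outlet draws nothing

    def report(self, n: int, watts: float) -> None:
        """Record a metered power reading for an outlet."""
        if self.state[n]:
            self.watts[n] = watts

    def total_load(self) -> float:
        """Load as seen at the PDU input: the sum of outlet draws."""
        return sum(self.watts.values())
```

The design choice worth noting is that metering and control live at the same point in the rack, which is exactly why the PDU is the nearest touchpoint for both monitoring and remote remediation.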

Monitoring to manage

Increased data traffic and shifting workloads complicate the management of a facility’s power and cooling resources. Intelligent PDUs give you access to real-time usage data and environmental alerts. With intelligent PDUs and DCIM software, all power usage data is easily tracked, stored, and exported into reports. By analyzing accurate power usage information at the cabinet level, data center managers can shift power resources within the white space far more accurately.
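The cabinet-level roll-up described above might look like this in miniature. The sample data and field names are invented; a real DCIM tool would pull readings from the PDUs themselves and export far richer reports.

```python
# Sketch of turning per-PDU usage samples into a cabinet-level report,
# the kind of roll-up a DCIM tool might export.

from collections import defaultdict

def cabinet_report(samples: list) -> dict:
    """Average power draw per cabinet from (cabinet, watts) samples."""
    readings = defaultdict(list)
    for cabinet, watts in samples:
        readings[cabinet].append(watts)
    return {cab: sum(vals) / len(vals) for cab, vals in readings.items()}
```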

In short, an intelligent PDU can be the control your data center infrastructure needs to support IoT applications. This is increasingly important as this infrastructure is being pushed closer to the edge with even less time for maintenance. Higher device demand comes with higher power demands, which means more challenges to the network. PDUs help you meet them and anticipate the next IoT evolution.

Marc Cram is Director of New Market Development for Server Technology, a brand of Legrand (@Legrand). A technology evangelist, he is driven by a passion to deliver a positive power experience for the data center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics. Follow him on LinkedIn or @ServerTechInc on Twitter.

Unmanned Edge Operations Are the Future

By Michael C. Skurla, Chief Technology Officer, BitBox USA

The growth of edge is an interesting phenomenon. Edge data center deployments closed the IT infrastructure gap opened by edge computing, just as the rise of public cloud and centralized computing paved the way to hybrid cloud and decentralized computing. Within a distributed infrastructure, however, the IT ecosystem demands a mix of telecom and web services.

Whether on-premise or closer to end-users, edge computing complements current public cloud or colocation deployments.

The increased demand for connectivity, and the data proliferation it drives, gives IoT a critical role as an edge enabler. But adding more “client” devices to networks isn’t IoT’s only role within an edge ecosystem. The often-overlooked side is the IoT technology required to enable edge operations themselves.

While cloud computing shifted the data center to a third-party network operations center (NOC), it didn’t eliminate the on-premise operators who manage and respond to facility problems. Edge introduces a new challenge to network operations: autonomous management, with limited access to individuals local to the equipment who can address problems or perform maintenance. The new norm no longer keeps in-house IT staff, equipment, and machines under one or several roofs; it distributes data center operations across thousands of smaller facilities, most of them not reachable by a short drive or walk.

Describing the edge as, “the infrastructure topology that supports the IoT applications,” Jeffrey Fidacaro, Senior Analyst for 451 Research Data Centers, underscores the importance of building a “unified edge/IoT strategy” that taps into multiple infrastructure options to manage the onslaught of IoT and facility systems while dealing with the needs of constant change.

Interestingly, the platforms around IoT solutions, not the hardware itself, are the answer to this quandary. Built on IT standards, IoT sensing and monitoring hardware offers granular, a la carte-style monitoring: easy-to-install, flexible form-factor hardware packages that can equip small sites, from shelters down to small electrical enclosures. Because these devices offer a multitude of functions and data points, they make reliable, remote facility management possible.

For instance, ServersCheck’s sensing technology generates granular site data from hardware, complementing an IoT platform that monitors large numbers of sites in concert while also tying in more complex control subsystems such as HVAC, generators, access control, and surveillance equipment. These IoT platforms expand monitoring and remote management to a global scale, allowing customized alarming, reporting, dashboarding, and more across a geographically distributed portfolio of locations.

This style of IoT management solution allows a flexible, customized design for each site. Its scalable infrastructure reduces the need for NOCs to monitor multiple separate software packages to determine conditions at each site, facilitating rapid remote diagnostics and triage of problems before staff are dispatched to remedy issues.

Edging to Cellular Levels

Telecommunications keeps pushing further to the edge. Remote monitoring is more crucial than ever, particularly with the planned 5G rollout, which ensures rapid growth of small-cell technology piggybacking on shared infrastructure such as streetlights, utility poles, and existing buildings.

As wireless transmitters and receivers, small cells are designed to bring network coverage to smaller sites and areas. Compared to the tall cell towers that carry strong network signals across vast distances, small cells are ideal for improving cellular connectivity for end-users in densely developed areas, and they play a crucial role in addressing increased data demands in centralized locations.

The rapid scalability of small-cell technology not only meets the demands of 4G networks but also adapts easily to 5G rollouts, bringing connectivity functions closer to end-users. In clustered areas, small-cell technology allows far superior connectivity, penetrating dense areas and in-building sites.

Consider small-cell technology the backbone of the fourth industrial revolution. By carrying signals with ever-greater amounts of data at higher speeds, small cells empower IoT devices to receive and transmit far more data. They also enable 5G, given that technology’s density requirements.

Enterprises face a flood of data from IoT connectivity. In fact, Cisco estimates this data flood to reach 850 zettabytes by 2021. This is driving edge buildouts of all sizes and shapes. To accomplish this, edge operators must rethink how they manage and monitor this explosion of sites. IoT platforms have proven to have the scalability and flexibility to take on this challenge in a highly affordable way.

As Forrester research predicted, “the variety of IoT software platforms has continued to grow and evolve to complement the cloud giants’ foundation IoT capabilities rather than compete with them” and it expects the IoT market to continue to see dramatic and rapid change in coming years.

It’s time for the technology that edge is being built to support – IoT – to play a role in managing the critical infrastructure that enables it. IoT platforms can tie the knot for this marriage.

Bio

Michael C. Skurla is the Chief Technology Officer for BitBox USA, providers of the BitBox IoT platform for multi-site, distributed facilities’ operational intelligence, based in Nashville, Tennessee. Mike’s in-depth industry knowledge in control automation and IoT product design sets cutting-edge product strategy for the company’s award-winning IoT platform.

Innovation Comes from Listening to Customer Needs

By Sandi Renden, Director of Marketing, Server Technology

Product improvements are not solely the result of product management influences.

In many cases, the most innovative products are the result of customer feedback, and they are often the most successful. To remain relevant, products must be in lockstep with customers’ changing needs. And data centers are a perfect example of an industry that must constantly adapt to change.

Why is the data center industry a good example? Because workloads need elastic processing capacity, servers have gone virtual, and networks are sprawling at the edges. As this continues, the power required to run these environments must be as flexible as its hardware and software counterparts. Intelligent rack power distribution unit (PDU) manufacturer Server Technology knows all too well how quickly changing data center power requirements can erode a product’s usefulness when it comes to supporting changing rack devices. However, the company also has a history of circumventing this situation and excelling where other PDU and rack-mount power strip manufacturers often struggle and sometimes fail.

Marc Cram, Director of New Market Development for Server Technology, shares some insights into how his company is able to quickly pivot product manufacturing and redesign data center PDUs to fit today’s elastic workload environments. Spoiler alert: their success comes from listening to their customers and allowing them to design their own PDUs.

Turning Pain into Gain

Where do good inventions truly come from? Willy Wonka claims the secret to inventing is “93% perspiration, 6% electricity, 4% evaporation and 2% butterscotch ripple.” Although this may be practical for creating Everlasting Gobstoppers, in the data center environment game-changing inventions are predicated on simpler methods. Perhaps the simplest, yet most successful, stimulus for inventors is listening to customers’ pain points.

Cram says that Server Technology was founded by listening to customers and figuring out how to satisfy as many of their power needs as possible with a single PDU design. “It’s a tradition that continues to this very day; we still do leading-edge work for our customers by listening to their specific needs and turning that information into targeted products for their exact applications,” he says.

Cram understood that data centers were traditionally built in raised-floor environments with the IT managers in the same facility, which made it easy for managers to replace rack-mount power strips as servers were swapped out. As the data center industry evolved, servers and racks were no longer necessarily located on the same premises as the IT managers. Listening to customers, it became clear that the ability to read rack power status remotely made tremendous sense and alleviated the pain of traveling between data centers to read or reset PDU devices.

However, not all customer needs are created equal, and some organizations did not want remote management capabilities. Rather, they voiced the need for PDUs equipped with alarm capabilities instead. “Banks are a good example of this,” Cram said. “The last thing a bank wants is for somebody to come in and turn off their rack power supply that just happens to be processing someone’s ATM transaction. You don’t ever want it to be interrupted.”

The Difference Between Hearing and Listening

Hearing is the act of perceiving sound; listening is the act of paying attention to sounds and giving them consideration. Listening to customers allowed Server Technology to jump directly from a basic, unmanaged PDU to a Switched PDU. Cram says that by listening to customers, the company discovered a need for a power strip that had remote monitoring capabilities but also provided individual outlet control. “This is where the ‘smarts’ in our products came from,” Cram noted.

A similar listening-and-consideration process was undertaken when Server Technology developed outlet power sensing. The company learned that customers liked the per-outlet sensing capabilities but did not want the control, and with this information it created smart PDU options. Server Technology now offers five PDU levels: Basic (power in/power out), Metered, Switched, Smart Per Outlet Power Sensing (Smart POPS), and Switched POPS.

The flexibility of five distinct data center PDU offerings, as well as the High-Density Outlet Technology (HDOT) line of PDUs, decreases the need to go back and reconfigure rack power when new devices are added. Cram says that “whether the need is for a full rack-of-gear or a rack that starts its life with three servers and a switch then eventually is used for some other configuration, Server Technology’s family of PDUs can handle the entire transition.”

The innovation behind HDOT and HDOT Cx lies in enabling customers to select which outlet types they want and where on the PDU they are placed. “You can reconfigure the rack to plug in a different device into the same CX outlet,” Cram says. For example, suppose a customer populated a rack with 1U servers using C13 outlets. The HDOT Cx would let them remove servers and add a high-end Cisco router or another big-power device that requires C19 outlets. The HDOT Cx outlet provides the flexibility they need without throwing away the original PDU.

Perhaps the ultimate result of listening to customers’ concerns is the ability Server Technology has given its customers to actually “build your own PDU,” or BYOPDU. This power strip innovation provides a website where customers can configure the exact outlet types needed, based on the PDU’s intended initial use. By specifying the Cx modules, the customer gains extreme flexibility and the opportunity to extend the life and usability of each power strip.

Listening, Not Hearing, Pays Dividends

Customer feedback is one of the greatest sources of product inspiration, and listening is a skill that must be developed to ensure useful evolution. Incorporating feedback to advance products benefits entire industries, and creating a perpetual feedback/innovation loop ensures a steady stream of improvements. Aside from the flexible HDOT PDU family, Server Technology has also developed PDUs that distribute 415VAC, 480VAC, or 380VDC – all in response to customer feedback and needs. “In an industry where rigidity breeds stagnation and stagnation impedes a data center’s ability to efficiently process workloads, customers’ voices are the inventor’s greatest ally,” Cram concluded.

Bio
Sandi Terry Renden is Director of Marketing at Server Technology, a brand of Legrand in the Datacenter Power and Control Division. Sandi is a passionate leader and creative visionary with over 25 years of management, digital marketing, and sales execution experience, and a proven track record of recruiting and retaining talent, hitting sales targets, and developing multi-channel digital marketing and branding campaigns for nonprofit and for-profit organizations in both B2B and B2C. She has international working and cultural (residency) experience on three continents (Americas, Asia, and Europe). Sandi earned a BA in Marketing from the University of Utah and an MBA in Marketing.

The 5 Best Practices in Data Center Design Expansion

By Mark Gaydos, Chief Marketing Officer, Nlyte Software

When it comes to managing IT workloads, the more software tools there are, the more risk and complexity are introduced. Eventually, the management process becomes like a game of Jenga: touching a piece in the wrong manner can topple the entire stack.

In the past, data center managers could understand all the operational aspects with a bit of intuitive knowledge plus a few spreadsheets. Now, large data centers can have millions, if not tens of millions, of assets to manage, generating telemetry from more than 6,000,000 monitoring points. Over time, these points can generate billions of data units, and they are forecasted to grow and spread to the network’s edges and extend through the cloud. AFCOM’s State of the Data Center survey confirms this growth: the average number of data centers per company represented was 12, expected to grow to 17 over the next three years. Across these companies, the average number of data centers slated for renovation is 1.8 this year and 5.4 over the next three years.

Properly managing the IT infrastructure as these data centers expand is no game of chance, and there are proven best practices that will ensure a solid foundation for years to come.

5 Must-Adhere-To Designs for Data Center Expansion:

  1. Use a Data Center Infrastructure Management (DCIM) solution. As previously mentioned, intuition and spreadsheets cannot keep up with the changes occurring in today’s data center environment. A DCIM solution not only provides data center visualization, robust reporting, and analytics, but also becomes the central source of truth for tracking key changes.
  2. Implement Workflow and Measurable Repeatable Processes. The IT assets that govern workloads are not like Willy Wonka’s Everlasting Gobstopper—they have a beginning and end-of-life date. One of the key design best practices is to implement a workflow and repeatable business process to ensure resources are being maintained consistently and all actions are transparent, traceable and auditable.
  3. Optimize Data Center Capacity Using Analytics and Reporting. From the moment a data center is brought to life, it is constantly being redesigned. To keep up with these changes and ensure enough space, power and cooling is available, robust analytics and reporting are needed to keep IT staff and facility personnel abreast of current and future capacity needs.
  4. Automation. Automating the operational functions that IT personnel would otherwise perform manually helps ensure consistent deployments across a growing data center portfolio while reducing costs and human error. Automation needs to occur at multiple stages, from ongoing asset discovery and software audits to workflow and cross-system integration processes.
  5. Integration. The billions of data units previously mentioned can be leveraged by many other operational systems. Integrate the core DCIM solution into other systems, such as building management systems (BMS), IT systems management (ITSM), and with virtualization management solutions such as VMware and Nutanix. Performing this integration will synchronize information so that all stakeholders in a company may benefit from a complete operational analysis.
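As a small illustration of practice 3, the capacity question “where can this new load go?” reduces to simple headroom arithmetic once power draw is tracked per rack. The rack names and budget figures below are invented for the example.

```python
# Sketch of the capacity math behind analytics-driven planning: compare
# each rack's current draw against its power budget to find headroom.

def headroom(racks: dict, budget_kw: float) -> dict:
    """Remaining kW per rack against a uniform power budget."""
    return {rack: round(budget_kw - draw, 2) for rack, draw in racks.items()}

def best_fit(racks: dict, budget_kw: float, need_kw: float):
    """Rack with the most headroom that can still fit the new load, or None."""
    fits = {r: h for r, h in headroom(racks, budget_kw).items() if h >= need_kw}
    return max(fits, key=fits.get) if fits else None
```

A real DCIM tool layers cooling, space, and network port constraints on top of this, but the reporting it produces answers the same question.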

Find a Complete Asset Management Tool

Technology Asset Management (TAM) software helps organizations gain clarity about what is installed, what services are being delivered, and who is entitled to use them. Think of TAM as 80% process and 20% technology: whatever makes the process easier helps the IT staff better manage their software assets. From the data center to the desktop, and from Unix to Linux, it makes no difference – all organizations need visibility into what they have installed and who has access rights.

A good asset manager enables organizations to quickly and painlessly understand their entire user base, as well as the IT services and software versions being delivered. Having full visibility pays high dividends, including:

  • Enabling insights into regulatory environments such as GDPR requirements. If the IT staff understands what the company has, they can immediately link it back to usage.
  • Gaining cost reductions. Why renew licenses that are not being used? Why renew maintenance and support for items that the organization has already retired? Companies can significantly reduce costs by reducing licenses based on current usage.
  • Achieving confidence in software vendor negotiations. Technology Asset Management empowers organizations to know, beyond a shadow of a doubt, what is installed and what is being used. The power is back in the company’s hands, not the software publisher’s.
  • Performing software version control. This allows companies to understand their entitlements, how these change over time, and who was using the applications. Software Asset Management allows software metering to tell, from the user’s perspective, who has, or needs to have, licenses.
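The cost-reduction and metering points above boil down to reconciling entitlements against measured usage. A minimal sketch, with invented product and user names:

```python
# Sketch of TAM-style license reconciliation: compare who is entitled to
# a product against who actually uses it, to surface unused licenses.

def shelfware(entitled: dict, active: dict) -> dict:
    """Per product, the entitled users who never use it (candidates to drop)."""
    return {
        product: users - active.get(product, set())
        for product, users in entitled.items()
        if users - active.get(product, set())
    }
```

Products where every entitlement is in use simply drop out of the result, leaving only the licenses worth renegotiating at renewal time.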

Accommodating Your Data Center Expansion

Complexity is all too often the byproduct of expanding data centers, and it is not confined to IT hardware and software. To accommodate expansion, facility owners are also seeking new types of power sources to offset OPEX. The AFCOM survey underscores this alternate-energy expansion: 42 percent of respondents have already deployed some type of renewable energy source or plan to over the next two months.

Selecting the Right IT Management Tool

Many IT professionals fall into the cadence of adding software and hardware to manage data center sprawl in all its forms, but this approach often leads to siloed containers and, inevitably, diminishing returns from unshared data. When turning to software for an automated approach to gain more visibility and control over newly connected devices and services, it’s important to carefully consider all integration points.

The selected tool needs to connect with and draw on the intelligence of other standard infrastructure tools, such as Active Directory and directory services, for ownership and location. Additionally, any new IT management tool meant to sum up the end-to-end compute system should be able to gather information using virtually any protocol – and, where protocols are disabled or unavailable, it must have alternative methodologies to collect the required information.

IT Workloads are too Important to be Left to Chance

IT workloads are too important to be left to chance, and managing data centers is not a game. Pinging individual devices at the top of the stack yields only temporary satisfaction: a devastating crash may be about to happen, and without knowing the stability of all dependencies, the processing tower could topple. Don’t get caught in a Jenga-type crisis. Mitigate risk with management tools that offer intuitive insights up and down the stack.

Bio
As Chief Marketing Officer at Nlyte Software, Mark Gaydos leads worldwide marketing and sales development. He oversees teams dedicated to helping organizations understand the value of automating and optimizing how they manage their computing infrastructure.