Category: Featured Article

The 5 Best Practices in Data Center Design Expansion

By Mark Gaydos, Chief Marketing Officer, Nlyte Software

When it comes to managing IT workloads, it's a fact that the more software tools there are, the more risk and complexity are introduced. Eventually, the management process becomes like a game of Jenga: touching a piece in the wrong manner can have an adverse effect on the entire stack.

In the past, data center managers could understand all the operational aspects with a bit of intuitive knowledge plus a few spreadsheets. Now, large data centers can have millions, if not tens of millions, of assets to manage. The telemetry generated can exceed 6,000,000 monitoring points, and over time these points can generate billions of data units. In addition, these monitoring points are forecast to grow and spread to the network's edges and extend through the cloud. AFCOM's State of the Data Center survey confirms this growth, finding that the average number of data centers per company represented was 12 and is expected to grow to 17 over the next three years. Across these companies, the average number of data centers slated for renovation is 1.8 this year and 5.4 over the next three years.

Properly managing the IT infrastructure as these data centers expand is no game of chance, but there are some proven best practices to leverage that will ensure a solid foundation for years to come.

5 Must-Follow Design Practices for Data Center Expansion:

  1. Use a Data Center Infrastructure Management (DCIM) solution. As previously mentioned, intuition and spreadsheets cannot keep up with the changes occurring in today’s data center environment. A DCIM solution not only provides data center visualization, robust reporting and analytics but also becomes the central source-of-truth to track key changes.
  2. Implement Workflow and Measurable Repeatable Processes. The IT assets that govern workloads are not like Willy Wonka’s Everlasting Gobstopper—they have a beginning and end-of-life date. One of the key design best practices is to implement a workflow and repeatable business process to ensure resources are being maintained consistently and all actions are transparent, traceable and auditable.
  3. Optimize Data Center Capacity Using Analytics and Reporting. From the moment a data center is brought to life, it is constantly being redesigned. To keep up with these changes and ensure enough space, power and cooling is available, robust analytics and reporting are needed to keep IT staff and facility personnel abreast of current and future capacity needs.
  4. Automation. Automating the many operational functions that IT personnel perform helps ensure consistent deployments across a growing data center portfolio, while reducing costs and human error. In addition, automation needs to occur at multiple stages, from ongoing asset discovery and software audits to workflow and cross-system integration processes.
  5. Integration. The billions of data units previously mentioned can be leveraged by many other operational systems. Integrate the core DCIM solution with other systems, such as building management systems (BMS), IT service management (ITSM) and virtualization management solutions such as VMware and Nutanix. Performing this integration synchronizes information so that all stakeholders in a company can benefit from a complete operational analysis; a minimal integration sketch follows this list.
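
To make the integration point concrete, here is a minimal sketch of a script that pulls per-rack readings from a DCIM REST API and mirrors them into an ITSM system. The endpoint paths, field names and token are hypothetical placeholders, not any specific vendor's API; substitute the interfaces your DCIM and ITSM products actually expose.

```python
"""Minimal sketch: sync rack power readings from a DCIM API into an ITSM system.

All endpoint paths, field names and credentials below are hypothetical
placeholders; replace them with the REST interfaces your vendors expose.
"""
import requests

DCIM_URL = "https://dcim.example.com/api/v1"      # hypothetical DCIM endpoint
ITSM_URL = "https://itsm.example.com/api/assets"  # hypothetical ITSM endpoint
HEADERS = {"Authorization": "Bearer <token>"}     # placeholder credentials

def pull_rack_power():
    """Fetch per-rack power and capacity figures from the DCIM solution."""
    resp = requests.get(f"{DCIM_URL}/racks", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assumed to be a list of rack records

def push_to_itsm(racks):
    """Mirror each rack record into the ITSM configuration database."""
    for rack in racks:
        record = {
            "asset_id": rack["id"],
            "location": rack.get("location"),
            "power_kw": rack.get("measured_power_kw"),
            "capacity_kw": rack.get("rated_power_kw"),
        }
        requests.post(ITSM_URL, json=record, headers=HEADERS,
                      timeout=30).raise_for_status()

if __name__ == "__main__":
    # Replace the placeholder URLs and token before running.
    push_to_itsm(pull_rack_power())
```

In practice, many DCIM and ITSM products ship native connectors for exactly this kind of synchronization; a script like this is only a fallback when no connector exists.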

Find a Complete Asset Management Tool

Technology Asset Management (TAM) software helps organizations understand and gain clarity as to what is installed, what services are being delivered and who is entitled to use them. Think of TAM as being 80% process and 20% technology. Whatever makes the 80% process portion easier will help the IT staff better manage all their software assets. From the data center to the desktop and from Unix to Linux, it does not make a difference: all organizations need visibility into what they have installed and who has access rights.

A good asset manager enables organizations to quickly and painlessly understand their entire user base, as well as the IT services and software versions being delivered. Having full visibility pays high dividends, including:

  • Enabling insights into regulatory environments such as GDPR requirements. If the IT staff understands what the company has, they can immediately link it back to usage.
  • Gaining cost reductions. Why renew licenses that are not being used? Why renew maintenance and support for items that the organization has already retired? Companies can significantly reduce costs by reducing licenses based on current usage.
  • Achieving confidence with software vendor negotiations. Technology Asset Management empowers organizations to know, beyond a shadow of a doubt, what is installed and what is being used. Now the power is back in the company's hands and not the software publisher's.
  • Performing software version control. This allows companies to understand their entitlements, how these change over time and who is using the applications. Software Asset Management allows for software metering to tell, from the user's perspective, who has, or needs to have, the licenses.

Accommodating Your Data Center Expansion

Complexity is all too often the byproduct of expanding data centers, and it is not limited to IT hardware and software. To accommodate this expansion, facility owners are also seeking new types of power sources to offset OPEX. The AFCOM survey underscores the alternative energy expansion by finding that 42 percent of respondents have already deployed some type of renewable energy source or plan to over the next two months.

Selecting the Right IT Management Tool

Many IT professionals fall into the cadence of adding more software and hardware to manage data center sprawl in all its forms, but this approach often leads to siloed systems and, inevitably, diminishing returns from unshared data. When turning to software for an automated approach to gain more visibility and control over the additional devices and services connected, it's important to carefully consider all integration points.

The selected tool needs to connect and combine with the intelligence of other standard infrastructure tools, such as Active Directory and directory services, for ownership and location. Additionally, any new IT management tool meant to sum up the end-to-end compute system should be able to gather information using virtually any protocol; where protocols are disabled or unavailable, it must have alternative methods to collect the required information.
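
As one way to picture that requirement, the sketch below shows a discovery routine that tries a preferred protocol first and falls back to alternatives when a protocol is disabled or unavailable. The collect_via_* helpers are hypothetical stand-ins for whatever collection mechanisms a given tool provides; only the fallback logic is the point.

```python
"""Minimal sketch of protocol-fallback collection, using hypothetical helpers."""
from typing import Callable, Optional

def collect_via_snmp(host: str) -> Optional[dict]:
    ...  # hypothetical: query standard MIBs if SNMP is enabled on the device

def collect_via_ssh(host: str) -> Optional[dict]:
    ...  # hypothetical: run inventory commands over SSH

def collect_via_agent(host: str) -> Optional[dict]:
    ...  # hypothetical: ask a locally installed agent for its inventory

COLLECTORS: list[Callable[[str], Optional[dict]]] = [
    collect_via_snmp,   # preferred: lowest footprint
    collect_via_ssh,    # fallback when SNMP is disabled
    collect_via_agent,  # last resort when no remote protocol is available
]

def discover(host: str) -> dict:
    """Try each collection method in order and return the first usable result."""
    for collector in COLLECTORS:
        try:
            result = collector(host)
            if result:
                return result
        except Exception:
            continue  # protocol unavailable or blocked; try the next method
    return {"host": host, "status": "unreachable"}

print(discover("core-switch-01.example.net"))  # hypothetical host name
```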

IT Workloads are too Important to be Left to Chance

IT workloads are too important to be left to chance, and managing data centers is not a game. Pinging individual devices at the top of the stack to obtain information only yields temporary satisfaction. There may be a devastating crash about to happen, but without knowing the stability of all dependencies, the processing tower could topple. Don't get caught in a Jenga-type crisis. Help mitigate risks with management tools that offer intuitive insights up and down the stack.

Bio
As Chief Marketing Officer at Nlyte Software, Mark Gaydos leads worldwide marketing and sales development. He oversees teams dedicated to helping organizations understand the value of automating and optimizing how they manage their computing infrastructure.

Edge Infrastructure Meets Commercial Property

By Michael C. Skurla, Director of Product Strategy, BitBox USA

Until very recently, buildings tacked on the "smart" prefix with promises of solving issues that we didn't care about or hadn't dreamed up yet. Things such as personal control solutions, even personal temperature control, cropped up. Many of these smart extras were wrapped in grand marketing promises of future-enabled ecosystems.

At the heart of the matter, the building industry has been highly proprietary and fractured, with each solution competing for monetary attention along with numerous other building trades on any construction budget. Lighting, security, irrigation, elevators, HVAC, wayfinding and dozens more, each vying for core competency and the right to play, with a desire to gain revenue share of a fixed-size pie of a commercial building budget.

With the introduction of IoT into the marketing envelope, a new lease was offered on an old marketing game in commercial buildings. There was a twist, however, since commercial building companies were somewhat behind the times compared to residential buildings. People had already connected their homes, or at least embraced the advantages of connected things in their lives, powered by platforms such as Alexa, Siri and Google's Nest.

Commercial Building Gaps

Building management system (BMS) companies were exceptional at adapting to the game. With a core competency grown out of HVAC, BMS solutions superbly integrate and manage HVAC, and are pseudo-network based. They naturally expanded their scope using their existing frameworks as a base and excelled at building automation. However, given how siloed the building industry is as a whole, they still fell short of the expectations residential occupants had come to demand in their homes and personal lives, while also needing to meet the newly evolved demands of commercial buildings. These gaps included:

Scale – Managing one large building is one thing, but managing hundreds under one portfolio is challenging, with existing building management frameworks being cost prohibitive at scale.

Synergy – BMSes are good at control but poor at leveraging inter-system data sets. Hence, although the BMS may control lots of systems simultaneously, it lacks (without customization) the ability to learn from and interact across those systems to leverage sensing platforms for greater efficiency and insight.

Micro-Analytics – While BMSes enable facility management and facility trades analytics, they cannot be used beyond the facility context. The data remains private in the context of building infrastructure.

Commercial solutions needed a new methodology to address these needs, and the IT space, ironically, already had it.

Enter IoT platforms.

For years, Edge Data Centers faced the same struggle as the market matured. It's important to note that Edge data solutions are deployed en masse – in the hundreds or thousands across large swaths of geography. Much of the infrastructure used in commercial buildings, such as HVAC, security and power monitoring, is similar between traditional and Edge deployments. What differs, however, is the sheer quantity, technological diversity and geographic spread. Staffing these edge locations 24×7 is impractical; hence operations must be monitored and managed entirely remotely, with the monitoring technology also taking on tasks typically handled by on-premises staff.

IoT Platforms Offer Scale

Unlike BMSes and SCADA systems of the past, IoT platforms are built at their core around the concept of diverse data at massive scale, with simplicity of installation and growth. Instead of relying on onsite commissioning and often custom programming to bridge the hardware, IoT platforms natively extract data from dozens of in-building protocols and subsystems. They also normalize the data and move it to a cloud location. Additionally, the setup of these solutions is vastly nimbler and generally consists of an Edge Appliance, wired and connected to a port that allows communication with a cloud infrastructure. Everything is then provisioned, managed and monitored remotely from the cloud – making this a perfect solution not only for Edge Data Centers, but also for commercial building portfolios.
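
As a rough illustration of what such an edge appliance does, the sketch below normalizes readings from two common building protocols into one schema and posts the batch to a cloud ingest endpoint. The ingest URL and the read_* driver functions are hypothetical placeholders, not BitBox's actual interfaces.

```python
"""Minimal edge-appliance sketch: normalize points from several building
protocols into one schema and ship them to a cloud ingest endpoint."""
import json
import time
import urllib.request

INGEST_URL = "https://iot.example.com/ingest"  # hypothetical cloud endpoint

def read_bacnet_points() -> list[dict]:
    ...  # hypothetical driver; returns None until a real BACnet driver is wired in

def read_modbus_points() -> list[dict]:
    ...  # hypothetical driver; e.g. [{"name": "Main.Meter.kW", "value": 412.0, "unit": "kW"}]

def normalize(point: dict, protocol: str, site: str) -> dict:
    """Map a protocol-specific reading onto one common, cloud-friendly schema."""
    return {
        "site": site,
        "protocol": protocol,
        "point": point["name"],
        "value": point["value"],
        "unit": point.get("unit"),
        "timestamp": time.time(),
    }

def publish(records: list[dict]) -> None:
    """POST the normalized batch to the cloud; provisioning stays remote."""
    body = json.dumps(records).encode("utf-8")
    req = urllib.request.Request(INGEST_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=30)

if __name__ == "__main__":
    site = "building-042"
    records = [normalize(p, "bacnet", site) for p in read_bacnet_points() or []]
    records += [normalize(p, "modbus", site) for p in read_modbus_points() or []]
    if records:  # nothing to send until real drivers are plugged in
        publish(records)
```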

IoT Platforms Bring Synergy

Given the number of subsystems in a building and the growing number of technologies and IoT sensing devices, there is an exceptional opportunity to leverage data between diverse systems. There is a significant amount of redundancy in the sensing these building trades deploy, which makes the technology, when viewed holistically, expensive to install and maintain. A prime example is the simplicity of an office building meeting room, where most likely there are three occupancy sensors detecting whether someone is there: one for temperature control, another for lighting and security, and a third for a room reservation system. Each of these requires its own wiring, programming and a separate system to monitor. Why can't one sensor provide all of this data?
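
A small sketch of that idea: one occupancy reading fanned out to the three systems that would otherwise each install their own sensor. The consumer functions are hypothetical stand-ins for the HVAC, lighting/security and room-booking integrations.

```python
"""Minimal sketch: a single occupancy event delivered to every subscribed system."""
from typing import Callable

def update_hvac(room: str, occupied: bool) -> None:
    print(f"HVAC: {'occupied' if occupied else 'setback'} setpoint for {room}")

def update_lighting_security(room: str, occupied: bool) -> None:
    print(f"Lighting/security: presence in {room} = {occupied}")

def update_room_booking(room: str, occupied: bool) -> None:
    print(f"Booking system: {room} shows {'in use' if occupied else 'free'}")

CONSUMERS: list[Callable[[str, bool], None]] = [
    update_hvac,
    update_lighting_security,
    update_room_booking,
]

def on_occupancy_event(room: str, occupied: bool) -> None:
    """Deliver one sensor's reading to every consumer instead of three sensors."""
    for consumer in CONSUMERS:
        consumer(room, occupied)

on_occupancy_event("conference-room-3b", occupied=True)
```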

Beyond this, there is a strong case for external data to be applied and combined with in-building data for AI-related functionality. Google Maps for traffic information, external business databases, Twitter feeds; the sky is the limit.

IoT Platforms Enable Micro-Analytics

With all of this data collected in the cloud from a portfolio of sites, the data becomes significantly more valuable to the emerging field of Micro-service Analytics. These analytics services and visualization engines tap organized data lakes, such as those provided by the IoT platform, and transform them into context-specific outcomes. Here are some possible scenarios:

  • Building data making actionable recommendations on building performance to reduce energy spend
  • IWMS (Integrated Workplace Management Systems) using the same data to analyze space utilization and recommend leasing adjustments
  • Retail marketing engines analyzing traffic patterns for merchandising

The analytics possibilities are endless through an ever-expanding marketplace of third-party micro-service companies, all enabled by the IoT platform and offering a consolidated API as a single source of “data” truth.

An Edge Site as a Commercial Building

IoT platforms in the commercial property sector aggregate, at scale, what is already integrated into buildings, allowing building technologies to come alive as part of the business rather than being seen as a necessary evil of simple facility maintenance.

Traditional building technology solutions have met the mark on improving facility performance from an operations and maintenance perspective. It is time to move beyond this, however. Facility data can be used for considerably greater purposes, generating meaningful outcomes beyond the physical building when integrated with the breadth of other system data commonly referred to as "business operational information".

This mix of available information opens the door to analytics and visualization with wide-reaching implications for the enterprise's top and bottom line.

This certainly does not advocate an end to SCADA or BMS solutions; quite the contrary. These systems perform vital control and operation of subsystems and should be neither duplicated nor replaced. An IoT platform layered on top of them lets the traditional silos keep performing their core functionality to the best of their trade ability, while every bit of data is analyzed from vastly different angles to benefit the greater business good beyond just the facility sector.

In our fast-paced, digitized infrastructure world, merging various systems is critical to allow for profitable outcomes while enabling facility operators and managers to confidently make data-driven business decisions.

Bio
Michael C. Skurla is Director of Product Strategy for BitBox USA, which offers a single, simple and secure IoT platform solution for enterprises to collect, organize and deliver distributed data sets from critical infrastructure with a simple-to-deploy Edge Appliance with secure cloud access.

Seamless Tenant-to-Tenant Migrations Through Coexistence

By Kelsey Epps, Senior Technical Partner Strategist, BitTitan

There's no question that businesses have adopted the cloud, big-time. In fact, Reuters reports that Microsoft has been shifting its reliance from the Windows operating system toward selling cloud-based services. The company's market value has topped $1 trillion as the software giant predicts even more cloud growth.

Now that businesses have moved so many of their key workloads out of on-premises servers and shifted them into the cloud, the great wave of on-prem-to-cloud migrations is past its peak. With the cloud so well-entrenched, IT departments and service providers are being asked more and more to migrate workloads from one cloud instance to another. There are a variety of business reasons for making such a move, whether it’s employee preferences for a given software stack, realigning contracts, or utilizing APIs that are a better business fit.

It would seem that once a set of workloads is in the cloud, moving them to another cloud instance should be a straightforward process. However, ensuring business continuity through a cloud-to-cloud migration is every bit as tricky as an on-prem-to-cloud move.

In fact, now that workers are enjoying the work-from-anywhere access that the cloud provides, they may even be less tolerant or forgiving of any interruption in their user experience. When workers expect uninterrupted data access and seamless collaboration through the transition, the “Big Bang” approach of migrating everything in a single sequence, user-by-user or workload-by-workload until the job is done is rarely an option. Organizations are increasingly turning to a batched approach with their migrations, which targets specific groups or departments to migrate at the most opportune times.

This approach offers many benefits, but also its own challenges, because when a batched approach is taken, end users will exist on both the Source and Destination. This is where tenant-to-tenant coexistence comes into play to help facilitate the move.

Tenant-to-tenant migrations defined

A tenant-to-tenant (T2T) migration is a form of cloud-to-cloud migration where the Source and Destination applications are the same; the move is from one instance of the applications to another instance of the same applications. In the case of Office 365, the scope of applications and supporting data typically includes mailboxes, personal archives or personal storage tables (PST files), OneDrive or SharePoint files, and of course, the data files associated with the various Office 365 applications.

Migrating a business (or a subset of one) is a challenge because of the heavy reliance on email communications and calendars. Users have no way of knowing who among their coworkers has migrated to the Destination and who has yet to do so.

What is the impact of this? Emails bounce back to the sender or pile up in a mailbox that’s no longer accessible. Meeting invites are missed, or users are erroneously double-booked because the free/busy information associated with their calendars is no longer available to all users, as some are still working from the Source and others from the Destination. These obstacles work against the primary goal that the IT team brings to any migration: to make the whole process seamless and essentially invisible to the users.

Continuous collaboration through coexistence

Coexistence is a migration technique that gets around the synchronization problems and keeps users happily working and collaborating with each other even though they’re being migrated at different times. When a migration is the result of a merger, acquisition or divestiture, an entire organization, department or division is moving from SourceCompany.com to DestinationCompany.com. It’s the ideal scenario for taking advantage of coexistence. All one has to do is follow these easy steps:

  • First, enable organizational sharing of the Office 365 tenants. For all users to be migrated, create mail-enabled contacts on the Destination that resolve to each individual’s mailbox on the Source.
  • As you migrate each user, remove the mail-enabled contact from the Destination. Create an Office 365-licensed user account to establish the new mailbox, with a forward that points back to the Source mailbox. This allows the user to keep working in the Source mailbox. Migrate the mailbox items from the Source to the Destination.
  • Finally, after you migrate each user, remove the forward on the Destination mailbox. On the Source, you can remove the mailbox and replace it with a mail-enabled contact that points to the Destination mailbox. Or, keep the mailbox in place and forward to the new Destination.
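
For illustration only, the batched flow above can be modeled as a simple per-user state machine. The helper functions below only record which stage each user is in; the comments mark where the real Exchange Online admin actions (creating contacts, licensing mailboxes, setting forwards) would happen.

```python
"""Minimal sketch of the batched coexistence flow, as an in-memory model."""
from enum import Enum, auto

class Stage(Enum):
    PRE_MIGRATION = auto()  # mail-enabled contact on Destination -> Source mailbox
    MIGRATING = auto()      # Destination mailbox created, forward -> Source
    MIGRATED = auto()       # forward removed; Source resolves to Destination

def pre_stage(user: str, state: dict) -> None:
    state[user] = Stage.PRE_MIGRATION
    # hypothetical admin action: create a Destination contact pointing at the Source mailbox

def start_migration(user: str, state: dict) -> None:
    state[user] = Stage.MIGRATING
    # hypothetical admin actions: remove the contact, license the Destination mailbox,
    # forward Destination -> Source, then copy mailbox items across

def finish_migration(user: str, state: dict) -> None:
    state[user] = Stage.MIGRATED
    # hypothetical admin actions: drop the Destination forward and point the Source
    # (contact or forward) at the new Destination mailbox

if __name__ == "__main__":
    state: dict[str, Stage] = {}
    batch = ["ana@SourceCompany.com", "raj@SourceCompany.com"]  # example batch
    for user in batch:
        pre_stage(user, state)
    for user in batch:
        start_migration(user, state)
        finish_migration(user, state)
    print(state)  # every user ends MIGRATED; mail flow never breaks mid-batch
```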

Plan ahead for swift execution

Coexistence is an effective technique, especially if you combine it with selective migration of older files or emails that are less likely to be needed and move them either before or after the active migration. This enables you to make the whole process quick and seamless. Put coexistence in your toolkit and use it the next time you’re faced with a tenant-to-tenant migration between domains. Of course, careful preplanning is the key — as it is with any migration.

Bio
Kelsey Epps is a senior technical partner strategist with BitTitan. A 20-year IT industry veteran, Kelsey works with MSPs and IT specialists on the technical preplanning aspects of the most complex migration projects.

Choosing a Managed Service Provider – How to Make the Case

By Al Alper, CEO, CyberGuard360 and Absolute Logic

Cyberattack! It's the word that strikes fear in the heart of every business owner. By now, most business owners are aware of the basic measures needed to help mitigate the threat – training employees to verify email that looks remotely suspicious, disallowing company data to be stored on personal devices – but these actions alone won't guarantee protection from malware, hacks and other varieties of cyberattack.

But, like most business owners, they concentrate first on what they need to keep the business running (or so they think): sales, marketing, managing employees and so forth.

Ask any business owner who probably feels they need another five or six hours in a day to accomplish everything how much time they spend thinking about cyber security, and you’re apt to get a response like, “Yes, I know it’s a threat, but we keep our software up-to-date and this stuff usually happens to someone else.”

Highlighting the scope of the problem

That’s where it’s tempting to launch into full sales mode and say something like, “Did you know that every day over 80,000 variants of malware are released, with thousands of hackers leveling tens of thousands of new hacks against businesses daily? And if that hasn’t frightened you enough, consider that it has been determined someone is hacked every 39 seconds.”

Or, “Would you buy a brand new BMW or Cadillac if it didn’t come with a warranty? Think of your business as the car, and your infrastructure protection as the warranty. You hope you never need it, but if you don’t have it and something happens, it’s costly to fix.”

One of the mistakes that companies make is having their IT support team expand their role to include cyber security. Without the right training, and in the absence of a clear understanding of the link between cyber security as an IT risk and as a business risk, companies might focus on the wrong cyber security threats. Because business contexts differ, individual cyber security threats or sources may or may not cause financial, compliance and/or reputational issues. As a result, companies might treat cyber security purely as an IT risk and could prioritize threats incorrectly.

The case for retaining the services of an experienced MSP

Here is where retaining the services of an experienced managed service provider (MSP) comes into the picture.

For us, part of the “sales” process is education. And it should be that way for everyone in this industry. We all know that there are a lot of organizations that promote themselves as MSPs. But, just as no two drops of rain are the same, neither are any two MSPs identical.

Here is where the education kicks in for us, and it should for anyone seeking to either sell or recommend MSP services.

What to look for in selecting an MSP

Elements to look for when selecting an MSP include:

  • Technical capabilities and experience working within your industry
  • Ability to support complex software infrastructures
  • Single point of contact/dedicated manager assignment
  • Remote and on-site support
  • Globalized service
  • Centralized analytics capabilities
  • Responsiveness and ability to communicate easily
  • Tiered cost system options

The range of security services an MSP can offer is wide, including:

  • Cloud security
  • Compliance monitoring
  • Detection and response services
  • Endpoint security, including monitoring for attacks
  • Firewalls
  • Intrusion detection and reporting
  • Log management and analysis
  • Managing advanced threat defense technologies
  • Penetration testing
  • Virtual private networks, or VPNs
  • Web and email security, such as anti-viral service and spam protection

An MSP should also have a thorough understanding of the compliance regulations that apply not only to the client's specific industry, but also in the state(s) where the client operates. It's wise to work with a single MSP with the ability to provide security program design and management with comprehensive knowledge of regulatory and standards compliance.

The importance of retaining an MSP that utilizes cutting-edge security management and mitigation tools cannot be overstated. You should look for firms that consistently introduce products designed to detect and alleviate cyber threats.

Many mitigation tools, for example, face challenges with the time and distance between storing and analyzing data, and having an MSP with the tools to meaningfully combat identified threats is an imperative. Many SIEM systems struggle to keep up with real-time, immediate investigation of threats, and acting on them requires a second or third level of effort. An MSP should have the tools to provide real-time monitoring of threats across the entire technological domain, the ability to analyze large quantities of data to determine where issues and incidents are occurring, and the ability to confront and handle threats immediately.

Where cyber threats are concerned, sometimes seconds can make the difference.

IT leaders have a responsibility to educate our clients

As leaders in the field of IT, it is incumbent upon us to educate our prospective clients to make the best and most informed choice when it comes to partnering with an MSP.

A comprehensive portfolio, thorough understanding of industry compliance regulations and an arsenal of leading-edge security management and mitigation tools are the trifecta to look for when choosing a managed service provider. Remember, our prospects and clients have worked far too hard and invested far too much to leave a business vulnerable to cyberattacks. The cost of retaining a well-rounded MSP pales in comparison to the price a business will need to pay if the company is left exposed to threat.

Bio

Al Alper is CEO and Founder of Absolute Logic, which since 1991 has been providing Fortune 500-style technical support and technology consulting to businesses of up to 250 employees within Connecticut and New York. He is also the founder and CEO of CyberGuard360, a firm which develops and markets a solution set of products designed to detect and mitigate threats from cyberattacks. Al is a national speaker on IT and security issues and has authored a series of books, Revealed! which addresses cyber security issues.

Alternative Energy and the War of the Currents

By Marc Cram, Director of Sales, Server Technology

In the 1880s, Thomas Edison and Nikola Tesla battled for the nation’s energy contract in what is now known as the War of the Currents. Edison developed direct current (DC) and it was the standard in the U.S. However, direct current is not easily converted to higher or lower voltages.

Enter Edison’s nemesis, Nikola Tesla. Tesla believed that alternating current (AC) was the solution to the voltage problem. Alternating current reverses direction a certain number of times per second and can be converted to different voltages relatively easily using a transformer.

It's extraordinary to think that after all this time, there is still an AC/DC conundrum happening, and nowhere is it more prevalent than in the data center power flow that churns through workloads to supply the applications of our digital lives. Consider that even when alternative energy is brought into the mix, these production technologies initially produce DC power, yet AC power is still being delivered to the IT racks within the data center.

Progress Since the War of the Currents

According to the U.S. Energy Information Administration, the United States has relied on coal, oil and natural gas to provide the majority of the energy consumed since the early 1900s. Nuclear energy was once seen as the clear successor to coal for domestic electricity generation in the U.S., but a series of mishaps over the years has delayed, perhaps permanently, the widespread adoption of nuclear power. Incidents at Three Mile Island (U.S.), Chernobyl (then the USSR, now Ukraine) and Fukushima (Japan) have made it difficult in the minds of many to justify the growth of nuclear power plants as a source of electricity, not to mention the waste byproducts of nuclear fission. The decades-long fight over a long-term storage facility at Yucca Mountain in Nevada has forced many facilities to retain their spent nuclear fuel on site.

And in the early 2000s, solar power was first considered as a potential alternative to carbon-based sources of energy. However, until recently, the cost per kWh had difficulty reaching parity with power from coal, oil and natural gas. Even with decades of drawbacks, it's still vitally important to pursue renewable forms of energy.

The EIA defines renewable energy as “energy from sources that are naturally replenishing but flow-limited. They are virtually inexhaustible in duration but limited in the amount of energy that is available per unit of time.”

It’s important to keep the EIA definition in mind as data center builders are considering renewable power. A mix of renewable power is important because it lessens the strain on the local utilities while also helping them to meet local, state and federal requirements for alternative energy use.

New Alternative Energy Current War?

Since 2001, the uptake of renewable energy has seen slow but steady progress. The figure below provides a detailed breakout of the types of renewable energy used in 2017, with biomass, hydroelectric and wind being the top sources.

Source: https://www.eia.gov/energyexplained/?page=renewable_home

As previously mentioned, the power train supplying a data center has historically been implemented using AC power. From generation to transmission to point of use, the power has been AC that gets stepped up or stepped down in voltage as needed, before being converted to DC power by the power supply residing in a server, network switch, router, load balancer or storage appliance.

On the other end of the power spectrum, many of the renewable energy sources inherently generate one form of electricity or another. Photovoltaic (PV) solar cells generate DC. Biogas and natural gas-powered fuel cells also generate DC. In order to be used in most data centers, the DC power from solar farms and fuel cells goes through an inversion process that turns DC into AC power. This allows the electricity to be transmitted efficiently across a distance and to be put back into “the grid” when not being put into energy storage systems or into loads such as data centers.

Regardless of renewable energy sources, data center locations are still primarily chosen for their proximity to cheap, reliable AC power from one or more utility providers. However, by using renewable energy sources such as wind, solar, fuel cells and hydroelectricity to power data centers, companies can minimize power transmission and conversion losses, reduce their perceived carbon footprint and gain control over their sources of energy production. This allows them to grow their data centers to meet customer demands while complying with local, state and federal environmental impact laws.

But here is the rub: data centers are not situated close enough to the windmill, or to the dam supplying the hydroelectric power, requiring the data center to rely on an AC infeed supplied by a utility to get the electricity from the point of generation to the point of consumption. Google's Sustainability Report underscores this by saying, "The places with the best renewable power potential are generally not the same places where a data center can most reliably serve its users. And while our data centers operate 24/7, most renewable energy sources don't — yet. So, we need to plug into the electricity grid, which isn't currently very green."

Who’s Doing Their Part?

Contrary to the “rub” statement, good energy precedent is being set by some of the largest data centers, processing the biggest workloads.

Microsoft has been a notable pioneer in attempting to re-think the power train for their data centers. For example, their data center in Cheyenne, WY is powered from a biogas source supplying fuel cells on site. More recently, Microsoft built an evaluation lab that brings natural gas to the top of the rack and uses fuel cells located there to convert the gas to DC power that is consumed directly by the devices in the rack. This saves on power transmission losses and conversion losses at the expense of deploying some potentially costly fuel cells.

Facebook is also leveraging renewable energy sources and it is predicted that by 2020, the company will have committed to enough new renewable energy resources to equal 100 percent of the energy used by every data center they have built.

For Google’s part, in the same Sustainability Report, as previously mentioned, the company says, “In 2017 Google achieved a great milestone: purchasing 100% renewable energy to match consumption for global operations, including our data centers and offices. We’re investing in a brighter future for the whole industry. And we’re going beyond investing in renewable energy for our own operations—we want to grow the industry as a whole. Not only have we invested $3 billion in renewable energy projects, but we freely share technology that might help others study and respond to environmental challenges.”

Conclusion

The transition to higher adoption of renewable energy production continues for both utilities and consumers while being led and paid for by the largest internet properties around the globe. By working with the utility companies to develop large renewable energy production facilities and committing to purchase the outputs, the data center giants are leading the way to meet the clean energy needs of their businesses and communities.

While renewable energy coming from these projects is still a mix of AC and DC power, in the end, AC power is the common intermediary that joins the point of production to the utility and on to the point of consumption at the data center. Thus, most enterprise and cloud data centers rely on AC power to run their IT infrastructure. Sorry, Edison: this is the elephant-in-the-room conclusion.

Whether your data center is using renewable energy or not, AC power is still the primary infeed to a data center, and AC power is distributed within the data center to the IT rack. For those data centers electing to remain with this tried-and-true approach, look for ways to improve efficiency, such as using an intelligent PDU that supports an infeed of 415VAC 3-phase to deliver 240VAC to the outlet without requiring a transformer in the PDU. This helps to minimize conversion and distribution losses, minimize the size of copper cabling required for power and enable maximum power density for the rack, resulting in a greener, more efficient data center.
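
For reference, the 415VAC-to-240VAC relationship is simply the line-to-neutral voltage of a three-phase feed, which is why no transformer is needed in the PDU:

```latex
% Line-to-neutral voltage of a 415 V line-to-line, three-phase infeed
\[
V_{\mathrm{LN}} \;=\; \frac{V_{\mathrm{LL}}}{\sqrt{3}} \;=\; \frac{415\ \text{V}}{1.732} \;\approx\; 240\ \text{V}
\]
```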

Bio

Marc Cram is Director of Sales for Server Technology (@ServerTechInc), a brand of Legrand (@Legrand). Marc is driven by a passion to deliver a positive power experience for the data center owner/operator. Marc brings engineering, production, purchasing, marketing, sales and quality expertise from the automotive, PC, semiconductor and data center industries together to give STI customers an unequalled level of support and guidance through the journey of PDU product definition, selection and implementation. Marc earned a BSEE from Rice University and has over 30 years of experience in the field of electronics.

Shifting to the Sky: Where Do Cloud Trends Leave Traditional Data Centers?

By Emil Sayegh, CEO & President of Hostway

Gartner recently made a bold claim: The data center is dead. Along with this proclamation, Gartner predicts that 80% of enterprises will have shut down their traditional data center by 2025, compared to the 10% we see today. Gartner also states that “hybrid cloud is the foundation of digital business” and further estimates that the hybrid cloud market will reach $209 billion in 2019, growing to $317 billion by 2022.

But what current trends and drivers are prompting Gartner’s claims and predictions? And, more importantly, does this mean you should jump ship from your data center to the hybrid cloud?

A Look at the Data Center Footprint

By diving into the current environment and statistical predictions for the future, we can shed some light on Gartner's perspective. Although annual global IP traffic continues to rise and predictions go even higher (it is estimated to reach 3.3 zettabytes by 2021), the number of traditional enterprise data centers globally has declined from 8.55 million in 2015 to 8.4 million in 2017 and continues to fall.

Even with data center numbers on the decline, the associated global energy usage and costs can be shocking. U.S. data centers devour more than 90 billion kilowatt-hours of electricity a year, in turn requiring the output of roughly 34 giant coal-powered plants. Data centers accounted for approximately 3% of total global electricity usage in 2015, nearly 40% more than the entire United Kingdom consumed. With all these statistics, it comes as no surprise that in 2016 the Data Center Optimization Initiative (DCOI) told federal agencies to reduce the costs of physical data centers by 25% or more, leading 11,404 data centers to be taken offline by May of 2018. While this initiative is cutting costs associated with traditional data centers, the resource burden of these shuttered federal data centers still must shift elsewhere.
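
As a rough sanity check on those figures (assuming the commonly cited "giant" plant size of about 500 MW and operation below full output), the annual consumption translates to roughly 10 GW of continuous draw:

```latex
% Average U.S. data center power draw implied by 90 billion kWh per year
\[
\frac{90 \times 10^{9}\ \text{kWh/yr}}{8760\ \text{h/yr}} \;\approx\; 1.03 \times 10^{7}\ \text{kW} \;\approx\; 10.3\ \text{GW}
\]
% Spread across 34 plants, that is roughly 300 MW delivered per plant on average,
% consistent with ~500 MW coal units running below full output.
```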

New Tech, New Tools, New Demands on Data Centers 

This shift from the traditional physical data center to newer options comes from more than just cost-cutting mandates; it is sparked and accelerated by the explosion of artificial intelligence, on-demand video streaming and IoT devices. These technologies are being rapidly adopted and require substantially more power and infrastructure flexibility. With 10 billion internet-connected devices currently in use and projections reaching 20 billion IoT devices by 2020, massive increases in data center infrastructure and electricity consumption are required to keep up.

With these mounting demands and the introduction of the Power Usage Effectiveness (PUE) metric, traditional data centers are evolving through more efficient cooling systems and greener, smarter construction practices for better-regulated buildings, along with greater energy efficiency from storage hardware. Successfully rising to the challenge is achievable, as Google demonstrates by now maintaining an impressive PUE of 1.12 across all its data centers.
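
For readers unfamiliar with the metric, PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a value of 1.12 means only about 11 percent of the energy entering the facility goes to overhead such as cooling and power distribution:

```latex
\[
\mathrm{PUE} \;=\; \frac{E_{\text{total facility}}}{E_{\text{IT equipment}}},
\qquad
\mathrm{PUE} = 1.12 \;\Rightarrow\; \frac{E_{\text{overhead}}}{E_{\text{total}}} = \frac{0.12}{1.12} \approx 10.7\%
\]
```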

Hybrid is The Answer

Despite these advances, enterprises are still favoring public, private and hybrid clouds over data centers, reinforcing Gartner's position; cost and demand are driving the shift from traditional data centers to the hybrid cloud. While many enterprise organizations assumed a complete transition to the public cloud would solve their issues with legacy systems, this approach ultimately shifted IT pains rather than resolving them. Escalating and unpredictable costs persisted and grew in the public cloud, along with new security concerns.

For organizations turning away from data centers and facing new issues in the public cloud, a better and more complete answer can be found in hybrid, custom and multi-cloud solutions – solutions blending the capabilities and benefits of public and private cloud technology with traditional data centers. This comprehensive approach meets the cost, security and compliance needs of enterprise organizations. With custom solutions providing better tools, better management methods and easier migrations, the future looks more hopeful, with hybrid and multi-clouds being the "new normal" for business. As AWS introduced its AWS Outposts product following Microsoft's introduction of the hybrid Azure Stack, the IT landscape truly begins to transform into this new normal.

More than Surviving, Data Centers Evolve and Thrive 

Streamlined and made stronger through hybrid and custom platforms, data centers are not in fact dead but have instead evolved to be more efficient and to support new solutions. Emerging approaches to storage, computing and physical space continue to make the data center a relevant component in today's IT equation for enterprise businesses.

Through even more efficient approaches like hyperconvergence and hyperscale, hybrid and multi-cloud solutions can simplify migrations, reduce cost and improve agility. These innovative new techniques in data storage and computing are proving to save organizations—and government agencies—from costly expansions and lagging operations. Additionally, physical improvements like airflow management, liquid cooling, microgrids and more are breathing new life into legacy infrastructures.

Keeping Up with the Cutting Edge

As traditional data centers evolve for a new IT era, the landscape has clearly become more complex than ever before. Keeping up requires IT partners that have data center expertise and that can also provide the necessary geodiversity, interconnection services, tools and experience from migration to management. Partnering also allows organizations to leverage experts that can rationalize public cloud workload placement and offer "as-a-service" offerings to alleviate some of the cost and resource pain points that organizations sometimes run into when trying to implement changes using their stretched internal IT staff. Building this network of partners to enable and integrate diverse platforms is just another component in the evolutionary change of the IT environment.

Working the Kinks Out of Workloads

By Mark Gaydos, Chief Marketing Officer, Nlyte Software

As we look at the issues data centers will face in 2019, it's clear that it's not all about power consumption. There is an increasing focus on workloads, but, unlike in the past, these workloads are not contained within the walls of a single facility; rather, they are scattered across multiple data centers, co-location facilities, public clouds, hybrid clouds and the edge. In addition, there has been a proliferation of devices, from micro data centers down to IoT sensors, utilized by agriculture, smart cities, restaurants and healthcare. Due to this sprawl, IT infrastructure managers will need better visibility into the end-to-end network to ensure smooth workload processing.

If data center managers fail to obtain a more in-depth understanding of what is happening in the network, applications will begin to lag, security problems due to old versions of firmware will arise and non-compliance issues will be experienced. Inevitably, those data center managers who choose to not obtain a deep level of operational understanding will find their facilities in trouble because they don’t have the visibility and metrics needed to see what’s really happening.

You Can’t Manage What You Don’t Know

In addition to the aforementioned issues, if the network is not properly scrutinized with a high level of granularity, operating costs will begin to increase because it will become more and more difficult to obtain a clear understanding of all the hardware and software pieces that are now sprawled out to the computing edge. Managers will always be held accountable for all devices and software running on the network, no matter where they are located. However, those managers who are savvy enough to deploy a technology asset management (TAM) system will avoid many hardware and software problems through the ability to collect more in-depth information. With more data collected, these managers have a single source of truth for the entire network, allowing them to better manage security, compliance and software licensing.

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. Managers need a TAM solution to remove many challenges that inhibit a deep dive into the full IT ecosystem because today, good infrastructure management is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, is still a unified entity that impacts all workloads and application performance.

Finding the Truth in Data

The ability to view a single source of truth, gleaned from data gathered across the entire infrastructure sprawl, will also help keep OPEX costs in check. Deploying a TAM solution combines financial, inventory and contractual functions to optimize spending and support lifecycle management. Being armed with this enhanced data set promotes strategic balance-sheet decisions.

Data center managers must adjust how they view and interact with their total operations. It's about looking at those operations from the applications first, and where they're running, then tracing them back through the infrastructure. With a macro point of view, managers will be better equipped to optimize workloads at the lowest cost, while also ensuring the best service level agreements possible.

It’s true, no two applications ever run alike. Some applications may need to be in containers or special environments due to compliance requirements and others may move around. An in-depth understanding of the devices and the workloads that process these applications is critically important because you do not want to make wrong decisions and put an application into a public cloud when it must have the security and/or compliance required from a private cloud.

Most organizations will continue to grow in size and, as they do, the IT assets required to support operations will also increase in number. Using a technology asset management system as the single source of truth is the best way to keep track of and maintain assets regardless of where they reside on today's virtual or sprawled-out networks. Imagine how difficult it would be, without a TAM solution in place, to find the answers if your CIO or CFO asked the following questions:

  • Are all our software licenses currently being used and are they all up to date?
  • How many servers do we have running now and how many can we retire next quarter?
  • Our ERP systems are down and the vendor says we owe them $1M in maintenance fees before they help us. Is this correct?

IT assets will always be dynamic and therefore must be meticulously tracked all the time. Laptops are constantly on the move, servers are shuffled around or left in a depleted zombie state and HR is constantly hiring or letting employees go. Given that data center managers must now share IT asset information with many business units, it’s imperative that a fresh list is continually maintained.

We are all embarking upon a new digital world where the essence of network performance resides in a level of understanding of the interrelationship between hardware and software that previous IT managers never had to contend with. Leveraging new tools for complete network and workload visibility will provide the full transparency necessary to ensure smooth operations in our distributed IT ecosystem.

Bio

Mark Gaydos is CMO at Nlyte where he oversees teams that help organizations understand the value of automating and optimizing how they manage their computing infrastructure.

Security Measures to Consider When Migrating to the Cloud

By Brian Wilson, Director of Information Technology, BitTitan

As more enterprises begin migrating to the cloud, the question of cybersecurity is increasingly urgent. While cloud migration offers many benefits, it’s key to understand your company’s overall goals. Security and data protection can be maintained and even enhanced by a move to the cloud, but the appropriate processes and procedures must be understood and implemented for safeguards to be effective.

Set Appropriate Goals

Problems arise if you fail to understand or adequately set your company’s cloud-migration goals. The cloud is a big amorphous term. Companies can get stuck when they find themselves in a “boiling the ocean” scenario. Migration projects must be broken down into deliverable actions with a realistic timeline.

It’s sometimes easy to assume the cloud is the panacea, especially with the cloud’s cost-cutting benefits. Cost is certainly a motivating factor, but the cloud is not a cost-cutting solution for every situation in every business. For example, an inappropriately-sized cloud environment that’s larger than a company requires will escalate costs.

It’s crucial to understand what an organization will gain in terms of flexibility, security and compliance. Most operating systems will work in the cloud, offering flexibility on the software and workloads they deploy. In addition, many cloud companies make significant investments in security, which are much bigger than what an individual company’s IT department could make.

Take a Holistic View

Fundamentally, the overall migration process remains the same whether you're moving from on-premises to cloud or cloud to cloud. In an on-prem environment, though, most companies are working with known systems and tool sets for security, network monitoring or mobile device management. Those existing tools might not translate to the cloud, even if, fundamentally, your processes haven't changed. It's important to plan for having the right set of security processes and tools when a migration produces a hybrid infrastructure, whether temporarily during the migration or as part of the ongoing architecture.

Given this, it’s vital to take a holistic view and evaluate the total environment so you can plan how to manage, monitor and secure operations within the cloud. Also, it’s important to understand that migration often brings new security responsibilities to managed service providers (MSPs) and their clients. These might include new application scanning tools, intrusion detection systems with event logging, internal firewalls for individual applications and database or data-at-rest encryption.

Though the underlying platform is the cloud provider's province, it's up to enterprises to decide how the platform will be used, what data will reside there, who will have access to it and how it will be protected. By thinking holistically about these things, you'll be more successful in achieving the appropriate level of cybersecurity protection.

Stay Vigilant

The quest to guard against cyberthreats is never-ending. The cloud and all things associated with it are always evolving, and it’s a constant battle to stay one step ahead of the bad actors.

Therefore, companies must understand their risk profile and the level of protection they need. For example, businesses that handle personal data such as names, phone numbers, social security or credit card numbers, or medical info will likely have higher risk profiles than those who do not.

Sensitive data must be safeguarded, while appropriate employee education and procedures must be in place. The key to understanding your risk profile is to identify possible threats, and with that in mind, consider where you might be most vulnerable — both internally and externally. Use that information to drive conversations about the level of risk tolerance that is acceptable for your organization. In turn, this will define the level of investment required to minimize or mitigate any existing gaps in your risk profile.

Remember: regardless of whether data lives on-prem or in the cloud, the number-one security threat is still human error when it comes to data breaches caused by phishing attempts or ransomware. Companies should educate employees on appropriate procedures, while also leveraging their provider’s security tips and offerings. This often involves communicating risks, making security a responsibility for all staff and providing people with routine training.

Not All Data is Equal

Finally, companies should understand how to differentiate and classify sensitive and non-sensitive data. Companies can come to rely on their MSP’s abilities to automate data storage and security.

For larger corporations that may be running an Azure environment, for example, there’s greater willingness to rely on their MSPs to automate various provisioning activities. If an organization wants more control in those areas, they must be aware of their responsibility to turn those features off.

Additionally, regarding governance, companies get far greater leverage through automation methods that can facilitate application deployment, perform routine maintenance tasks to provide a level of uniformity that follows best practices and simplify compliance accreditation.

As a company considers a cloud migration, the simple edict is to understand from where you’re starting and where you ultimately hope to land — all before beginning a migration project. A clear vision of what your company wants to accomplish will ultimately determine your success. It’s a new environment that requires support from everyone involved.

Bio

Brian Wilson is the Director of Information Technology at BitTitan, where he specializes in the areas of IT strategy, roadmaps, enterprise systems and cloud/SaaS technologies. Prior to joining BitTitan, Brian worked as an executive with San Jose-based IT services company Quantum and in various IT consultant roles with Cascade Technology Consulting, PricewaterhouseCoopers and the Application Group. Brian has over 25 years of experience as a senior IT executive, with an industry background that spans high technology, consulting, commercial real estate and manufacturing.

Five Emerging Trends for MSPs and IT Pros in 2019

By Mark Kirstein, Vice President of Products, BitTitan

The new year brings a wave of eagerness and ambition for innovators across industries. For IT professionals and managed service providers (MSPs), this often means setting new business goals. For instance, in 2019 MSPs or IT firms may be considering new service offerings, building a new core competency, or simply growing revenue and improving profitability.

Regardless of the goal, as part of this process, it is often helpful to think about trends surrounding the adoption of technology solutions. At BitTitan, we’ve been thinking about this and want to share our thoughts on what to expect in the coming year:

1. Cloud solution adoption makes its way through the early majority

If a company is only using on-premise technology versus cloud-based solutions, they’re likely falling behind the times.

Consider, as just one example, email hosted in the cloud. According to a recent survey from Gartner, just shy of 25 percent of public companies have made the jump to cloud email services, with adoption rates among SMBs even higher. In the coming year, we expect to see many more SMBs and enterprises alike moving to cloud-based email – the end of the early adopters and the beginning of the early majority.

Given this, MSPs and IT firms may want to do an audit of technology solutions and workstreams under their management to evaluate whether on-premise solutions would be more cost-effective if they were transitioned to the cloud.

2. Fueling the fire of cloud adoption

Remember that the enthusiasm for cloud-based solutions is being fueled by a number of factors, not just email. Consider that:

  • Many businesses have already successfully migrated email and/or other workloads to the cloud, boosting the confidence of those who were once wary of cloud solutions.
  • Cloud providers like Microsoft are increasing license costs and shortening support cycles of on-premise solutions, pushing businesses toward cloud alternatives. As a result, maintaining this legacy infrastructure is becoming more costly for IT.
  • Security concerns previously prevented people from moving to the cloud, but these concerns are being addressed. Cloud solutions can provide a higher level of security and are better maintained by cloud providers like Microsoft or Google through regular updates and patches to address new cyber threats. The same cannot be said for on-prem systems.

3. Customers are becoming savvier about the cloud

While the last decade has primarily focused on why and how organizations should move to the cloud, in the next decade we’ll see more managers focused on optimizing their cloud services. Tech professionals will be more sophisticated in selecting cloud providers and adopting new services.

For instance, they may take a multi-cloud approach for more flexibility and room for negotiation, helping to stave off vendor lock-in while allowing businesses to host workloads with the cloud provider that makes the most sense for specific business objectives.

As a result, managing IT environments will become more complex. Hybrid and multi-cloud strategies dominate, and department-level technology decisions are driving an influx of SaaS solutions, which can be challenging for the IT teams that must govern them and ensure broader business integration. As this trend continues in 2019, MSPs will seek additional software management tools to ease the transition and simplify troubleshooting.

4. The market for specialists heats up

Companies will move away from generalists to tackle their cloud needs, and MSPs might consider specializing in one particular area to distinguish themselves from competitors. A wealth of new technology is available, from container services for moving applications to serverless computing, blockchain applications, and automation for managing IT environments, and more specialists are needed to manage this growing landscape effectively.

Also, look for MSPs to further establish vertical specialties in industries such as health care or education, where speaking the end user’s language and understanding their specific ecosystem’s needs, challenges, and technical solutions gives MSPs a leg up.

5. Governance further commands attention

Another primary focus for IT in 2019 will be improved security and governance practices. For those coming from on-prem infrastructure with well-established processes, cloud governance looks far different. IT and MSPs have an opportunity to review and update these processes to ensure they’re appropriate for cloud-based systems. In addition to dictating where data is stored and for how long, governance plans should also address the availability, usability, and integrity of data.

Also, IT managers must ensure migration plans – whether to the cloud or between clouds – have security as a core tenet of their execution. Cyberthreats are only becoming more sophisticated, and any organization, regardless of size or industry, is vulnerable. Educate users about cyberthreats, keep systems and applications up to date, and explore additional safeguards to ensure all bases are covered.

Despite new challenges in 2019, the outlook for IT professionals and the service provider landscape remains strong. Technology leaders who continue to look ahead and approach the cloud purposefully will help their organizations execute on their visions in the coming year and beyond.

Bio

Mark Kirstein is the Vice President, Products at BitTitan, leading product development and product management teams for the company’s SaaS solutions. Prior to BitTitan, Mark served as the Senior Director of Product Management for the Mobile Enterprise Software division of Motorola Solutions, continuing in that capacity following its acquisition by Zebra Technologies in 2014. Mark has over two decades of experience overseeing product strategy, development, and go-to-market initiatives.

When not on the road coaching his daughter’s softball team, Mark enjoys spending time outdoors and rooting for the Boston Red Sox. He holds a bachelor’s degree in Computer Science from California Polytechnic State University.

The Elements of a Good Disaster Recovery Plan

By Tim Mullahy, Executive Vice President and Managing Director at Liberty Center One

No one wants their business to have to weather a disaster – but sometimes they happen. If you go in without any concept of what you’re doing, you’re more or less guaranteed to be in crisis. But if you go in with a well-established disaster recovery plan? You’ll be able to survive just about anything.

Sometimes, bad things happen. Sometimes, those bad things are unavoidable. And sometimes, they impact your business in a way that could cost you clients, customers, and employees.

In today’s climate, your business faces a massive volume of threats, spread across a larger threat surface than ever before. Disaster recovery is critical to your security posture, as it’s often not a question of if you’ll suffer a cyber-incident, but rather of when.

Whether or not your organization survives a disaster largely depends on one thing – how well you’ve prepared yourself for it. With a good disaster recovery plan, you can weather just about any storm. Let’s talk about what such a plan involves.

A Clear Idea Of Potential Threats

It’s impossible to identify every single risk your business could possibly face – nor should you put time and resources into doing so. Instead, focus on the disasters you’re likeliest to face. For instance, a business located in Vancouver probably doesn’t have to worry about a tornado, but there’s always a chance that it could be struck by a flood.

When coming up with this list, consider your industry, the technology you use, your geographical location, and the political climate where you’re located. Incidents that impact all businesses include ransomware, malware, hardware failure, software failure, power loss, and human error. Targeted attacks are another threat to your organization, particularly if you work in a high-security space – you may even end up in the crosshairs of a state-sponsored black hat.

Ideally, your crisis response plan will be flexible enough to deal with any incident you deem likely, and adaptable enough to apply when you encounter an unexpected disaster.
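One lightweight way to focus on the disasters you’re likeliest to face is a simple risk register that scores each threat by likelihood and impact. The Python sketch below uses made-up scores purely to illustrate the prioritization step; your own scores should come from the industry, technology, location, and political factors described above.

  # Minimal risk-register sketch: score each threat by likelihood x impact
  # (1-5 scales) and plan for the highest-scoring ones first. Scores are illustrative.

  threats = [
      {"name": "ransomware",        "likelihood": 4, "impact": 5},
      {"name": "hardware failure",  "likelihood": 3, "impact": 4},
      {"name": "regional flooding", "likelihood": 2, "impact": 5},
      {"name": "tornado",           "likelihood": 1, "impact": 5},
      {"name": "human error",       "likelihood": 4, "impact": 3},
  ]

  for threat in sorted(threats, key=lambda t: t["likelihood"] * t["impact"], reverse=True):
      score = threat["likelihood"] * threat["impact"]
      print(f"{threat['name']:<18} risk score {score}")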

An Inventory Of All Critical Assets

What systems, processes, and data can your organization not survive without? What hardware is especially important to your core business, and what sort of tolerance does your entire organization have for downtime and data loss? Make a list of every asset you control, both hardware and software, and arrange that list in order from most important to least important.

From there, you want to ask yourself a few questions.

First, what systems are absolutely business-critical? These are the hardware and software your business cannot operate without: systems you need to keep as close to 100% uptime as possible. The server that hosts a customer-facing application is one example.

Second, what data do you need to protect? Healthcare organizations, for example, are required to keep redundant backups of all patient data and to ensure that data is encrypted and accessible at all times. Figure out what files are most business-critical and prioritize those in your response plan.

Third, for the assets mentioned above, what is their tolerance for downtime? If those systems do go down, how much revenue will you potentially lose for each minute they’re offline? Are there any other considerations aside from revenue that mark them as important?

For instance, a communications platform for first responders needs 100% uptime – lives literally depend on it.

Finally, what can you do without? If you run a home-repair business that brings in customers mostly through word of mouth, your website going down probably won’t be too harmful to your bottom line. If, on the other hand, you’re an eCommerce store, your website is likely one of the most important assets you’ve got.
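Those four questions translate naturally into a sortable inventory. The Python sketch below ranks hypothetical assets by how much revenue is at risk within their recovery windows; the asset names, revenue figures, and recovery time objectives are invented purely for illustration.

  # Hypothetical asset inventory ranked by downtime exposure.
  # revenue_per_hour estimates what an hour offline costs; rto_hours is the
  # recovery time objective the business says it can tolerate.

  assets = [
      {"name": "customer-facing web app", "revenue_per_hour": 12_000, "rto_hours": 1},
      {"name": "order database",          "revenue_per_hour": 12_000, "rto_hours": 1},
      {"name": "marketing site",          "revenue_per_hour": 400,    "rto_hours": 24},
      {"name": "internal wiki",           "revenue_per_hour": 50,     "rto_hours": 48},
  ]

  for asset in sorted(assets, key=lambda a: a["revenue_per_hour"], reverse=True):
      exposure = asset["revenue_per_hour"] * asset["rto_hours"]
      print(f"{asset['name']:<26} ~${exposure:,} at risk within its recovery window")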

As you’ve no doubt surmised, no two disaster recovery plans are going to look the same. Every business has different needs and requirements. Every business has different assets they need to protect, and a different level of tolerance for downtime.

Once you’ve figured out your critical assets, ensure you have backups and redundant systems in place. These failover methods need to be thoroughly tested. You must be absolutely certain they’re in working order; you don’t want to find out the files on your backup server are corrupt after you’ve lost your hardware in a flood.
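Testing failover can be elaborate, but even routinely verifying that backup files are intact catches the corrupt-backup scenario described above. Here is a minimal Python sketch, assuming backups are plain files with checksums recorded at backup time; the manifest format and paths are hypothetical.

  # Backup-verification sketch: recompute each file's SHA-256 and compare it to
  # the checksum recorded when the backup was taken. Paths and the manifest
  # format are assumptions for illustration.

  import hashlib
  from pathlib import Path

  def sha256_of(path):
      digest = hashlib.sha256()
      with path.open("rb") as handle:
          for chunk in iter(lambda: handle.read(1024 * 1024), b""):
              digest.update(chunk)
      return digest.hexdigest()

  def verify(manifest):
      """Return the backup files that are missing or no longer match their checksum."""
      failures = []
      for filename, expected in manifest.items():
          path = Path(filename)
          if not path.exists() or sha256_of(path) != expected:
              failures.append(filename)
      return failures

  # Hypothetical manifest written by the backup job.
  manifest = {"backups/customers.db.bak": "0f3a9c...", "backups/orders.db.bak": "9c1d44..."}
  for bad in verify(manifest):
      print(f"FAILED verification: {bad}")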

Accounting For People

Too many disaster recovery plans neglect the business’s most important resource – its people. How will employees escape the building during a catastrophic event? What should each staffer do during an emergency? Who’s responsible for coordinating emergency communication, reaching out to shareholders, and ensuring all critical systems failed over properly?

Ensure that roles and responsibilities during an incident are clearly defined and well-established. More importantly, your plan needs to include guidelines for shifting responsibility. If the staffer who’s meant to coordinate colleagues during a fire is on vacation, who steps into the role?

Your disaster recovery plan needs to account for these details, while also including a means of disseminating information between employees. Ideally, you’ll want a crisis communication platform of some kind. Ensure that everyone has access to that platform.

When establishing your communications guidelines, make sure you attend to the following:

  • How you will keep in touch with partners and shareholders
  • How you will notify customers of the incident
  • How employees will communicate during the incident

Seeing To Recovery & Service Restoration

So, you weathered the storm. Your business is still standing. Good – now it’s time for recovery.

The inventory you performed should already give you a good idea of which services are most critical to your business, so figuring out which ones to restore first is a fairly simple process.

Beyond service restoration, you need to establish who you’ll reach out to and how. If clients or shareholders suffered monetary losses during the incident, how will you reimburse them? After the crisis has subsided, what will you do to improve your response to the next incident?

Practice and Evaluation

It’s been said that no plan survives first contact with the enemy. That’s true of disaster recovery, as well – if you leave your plan untested and unevaluated until your first disaster, it’s extremely likely you’re going to find weaknesses at the worst possible time. To identify areas that need improvement and familiarize staff with their responsibilities, run regular practice scenarios.

Additionally, you should constantly revisit your disaster recovery plan. Don’t approach it as a project. Approach it as a process.

Always look for ways you can improve it. Regularly revisit and re-evaluate it in light of new technology or new threats. And never assume you’ve done enough.

You can always be better.

Don’t Let A Crisis Cripple Your Business

Natural disasters. Hardware failure. Hackers and rogue employees. Malware and ransomware. The array of threats facing your organization is staggering. A good crisis response and disaster recovery plan is critical if you’re to survive, and it’s central to establishing a strong cybersecurity posture.

Bio

Tim Mullahy is the Executive Vice President and Managing Director at Liberty Center One, a new breed of data center located in Royal Oak, MI. Tim has a demonstrated history of working in the information technology and services industry.