Category: Featured Article

Shifting to the Sky: Where Do Cloud Trends Leave Traditional Data Centers?

By Emil Sayegh, CEO & President of Hostway

Gartner recently made a bold claim: The data center is dead. Along with this proclamation, Gartner predicts that 80% of enterprises will have shut down their traditional data center by 2025, compared to the 10% we see today. Gartner also states that “hybrid cloud is the foundation of digital business” and further estimates that the hybrid cloud market will reach $209 billion in 2019, growing to $317 billion by 2022.

But what current trends and drivers are prompting Gartner’s claims and predictions? And, more importantly, does this mean you should jump ship from your data center to the hybrid cloud?

A Look at the Data Center Footprint

By diving into the current environment and statistical predictions, we can shed some light on Gartner’s perspective. Although annual global IP traffic continues to rise and is estimated to reach 3.3 zettabytes by 2021, the number of traditional enterprise data centers worldwide declined from 8.55 million in 2015 to 8.4 million in 2017 and continues to fall.

Even with data center numbers on the decline, the energy usage and associated costs globally can be shocking. U.S. data centers devour more than 90 billion kilowatt-hours of electricity a year, roughly the output of 34 large coal-fired power plants. Data centers accounted for approximately 3% of total global electricity usage in 2015, nearly 40% more than the entire United Kingdom consumed. With all these statistics, it comes as no surprise that in 2016 the Data Center Optimization Initiative (DCOI) told federal agencies to reduce the costs of physical data centers by 25% or more, leading 11,404 data centers to be taken offline by May of 2018. While this initiative is cutting costs associated with traditional data centers, the resource burden of those shuttered federal data centers still must shift elsewhere.

New Tech, New Tools, New Demands on Data Centers 

This shift from the traditional physical data center to newer options comes from more than just cost-cutting mandates—it is sparked and accelerated by the explosion of artificial intelligence, on-demand video streaming and IoT devices. These technologies are being rapidly adopted and require substantially more power and infrastructure flexibility. With 10 billion internet-connected devices currently in use and projections of 20 billion IoT devices by 2020, massive increases in data center infrastructure and electricity consumption are required to keep up.

With these mounting demands and the introduction of the Power Usage Effectiveness (PUE) metric, traditional data centers are evolving through more efficient cooling systems, greener and smarter construction practices for better-regulated buildings, and greater energy efficiency from storage hardware. Successfully rising to the challenge is achievable, as Google demonstrates by maintaining an impressive PUE of 1.12 across all its data centers.
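
For readers new to the metric, PUE is simply the ratio of total facility energy to the energy actually delivered to IT equipment; a perfect score is 1.0. A minimal sketch of the arithmetic, using made-up meter readings:

    # PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
    # A perfect 1.0 means every watt entering the building reaches IT gear.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        return total_facility_kwh / it_equipment_kwh

    # At Google's reported 1.12, each 1.00 kWh of server load carries only
    # about 0.12 kWh of cooling and power-distribution overhead.
    print(pue(1_120_000, 1_000_000))  # -> 1.12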

Hybrid Is the Answer

Despite these advances, enterprises increasingly favor public, private and hybrid clouds over traditional data centers, reinforcing Gartner’s position; cost and demand are driving the shift from traditional data centers to the hybrid cloud. While many enterprise organizations assumed a complete transition to the public cloud would solve their issues with legacy systems, this approach ultimately shifted IT pains rather than resolving them. Escalating and unpredictable costs persisted and grew in the public cloud, along with new security concerns.

For organizations that turned away from data centers only to encounter new issues in the public cloud, a better and more complete answer can be found in hybrid, custom and multi-cloud solutions – solutions blending the capabilities and benefits of public and private cloud technology with traditional data centers. This comprehensive approach meets the cost, security and compliance needs of enterprise organizations. With custom solutions providing better tools, better management methods and easier migrations, the future looks more hopeful, with hybrid and multi-clouds becoming the “new normal” for business. As AWS introduced its AWS Outposts product following Microsoft’s introduction of the hybrid Azure Stack, the IT landscape truly began to transform into this new normal.

More than Surviving, Data Centers Evolve and Thrive 

As they are streamlined and strengthened through hybrid and custom platforms, data centers are not in fact dead; they have instead evolved to be more efficient and to support new solutions. Emerging approaches to storage, computing and physical space continue to make the data center a relevant component in today’s IT equation for enterprise businesses.

Through even more efficient approaches like hyperconvergence and hyperscale, hybrid and multi-cloud solutions can simplify migrations, reduce cost and improve agility. These innovative new techniques in data storage and computing are proving to save organizations—and government agencies—from costly expansions and lagging operations. Additionally, physical improvements like airflow management, liquid cooling, microgrids and more are breathing new life into legacy infrastructures.

Keeping Up with the Cutting Edge

As traditional data centers evolve for a new IT era, the landscape has become more complex than ever. Keeping up requires IT partners with data center expertise that can also provide the necessary geodiversity, interconnection services, tools and experience, from migration through management. Partnering also lets organizations leverage experts who can rationalize public cloud workload placement and offer “as-a-service” options to alleviate some of the cost and resource pain points organizations run into when trying to implement changes with a stretched internal IT staff. Building this network of partners to enable and integrate diverse platforms is just another component in the evolutionary change of the IT environment.

Working the Kinks Out of Workloads

Mark Gaydos, Chief Marketing Officer, Nlyte Software

As we look at the issues data centers will face in 2019, it’s clear that it’s not all about power consumption. There is an increasing focus on workloads, but, unlike in the past, these workloads are not contained within the walls of a single facility; rather, they are scattered across multiple data centers, co-location facilities, public clouds, hybrid clouds and the edge. In addition, there has been a proliferation of devices, from micro data centers down to the IoT sensors used in agriculture, smart cities, restaurants and healthcare. Given this sprawl, IT infrastructure managers will need better visibility into the end-to-end network to ensure smooth workload processing.

If data center managers fail to obtain a more in-depth understanding of what is happening in the network, applications will begin to lag, outdated firmware will create security problems and non-compliance issues will surface. Inevitably, data center managers who choose not to develop a deep level of operational understanding will find their facilities in trouble, because they lack the visibility and metrics needed to see what’s really happening.

You Can’t Manage What You Don’t Know

In addition to the aforementioned issues, if the network is not properly scrutinized with a high level of granularity, operating costs will begin to increase, because it becomes more and more difficult to maintain a clear understanding of all the hardware and software pieces now sprawled out to the computing edge. Managers will always be held accountable for all devices and software running on the network, no matter where they are located. However, managers savvy enough to deploy a technology asset management (TAM) system will avoid many hardware and software problems through the ability to collect more in-depth information. With more data collected, these managers have a single source of truth—for the entire network—to better manage security, compliance and software licensing.

Additionally, a full understanding of the devices and configurations responsible for processing workloads across this diverse IT ecosystem will help applications run smoothly. Managers need a TAM solution to remove the many challenges that inhibit a deep dive into the full IT ecosystem, because good infrastructure management today is no longer only about the cabling and devices neatly stacked within the racks. Now, data center managers need to grasp how a fractured infrastructure, spread across physical and virtual environments, is still a unified entity that impacts all workloads and application performance.

Finding the Truth in Data

The ability to view a single source of truth, gleaned from data gathered across the entire infrastructure sprawl, will also help keep OPEX costs in check. Deploying a TAM solution combines financial, inventory and contractual functions to optimize spending and support lifecycle management. Being armed with this enhanced data set promotes strategic, balance-sheet-level decisions.

Data center managers must adjust how they view and interact with their total operations. It’s about looking at those operations applications-first—where they’re running—then tracing them back through the infrastructure. With a macro point of view, managers will be better equipped to optimize workloads at the lowest cost while also ensuring the best service level agreements possible.

It’s true, no two applications ever run alike. Some applications may need to live in containers or special environments due to compliance requirements; others may move around. An in-depth understanding of the devices and workloads that process these applications is critically important, because you do not want to make the wrong decision and put an application into a public cloud when it requires the security and/or compliance of a private cloud.

Most organizations will continue to grow, and as they do, the IT assets required to support operations will also increase in number. Using a technology asset management system as the single source of truth is the best way to keep track of and maintain assets regardless of where they reside on today’s virtual or sprawled-out networks. Imagine how difficult it would be to find the answers if your CIO or CFO asked the following questions—without a TAM solution in place (a minimal sketch of the idea follows the list):

  • Are all our software licenses currently being used and are they all up to date?
  • How many servers do we have running now and how many can we retire next quarter?
  • Our ERP systems are down and the vendor says we owe them $1M in maintenance fees before they help us. Is this correct?
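
To make “single source of truth” concrete, here is a minimal sketch of the kind of queries a TAM repository can answer. The record fields and sample data are illustrative assumptions, not any vendor’s schema:

    from datetime import date

    # Illustrative asset records; a real TAM system would populate these
    # automatically from discovery agents across the network.
    assets = [
        {"type": "server", "name": "erp-01", "status": "active",
         "license_expiry": date(2020, 6, 30), "maintenance_paid": True},
        {"type": "server", "name": "web-07", "status": "zombie",
         "license_expiry": date(2018, 1, 15), "maintenance_paid": False},
    ]

    # "Are all our software licenses up to date?"
    stale = [a["name"] for a in assets if a["license_expiry"] < date.today()]

    # "How many servers can we retire next quarter?"
    retirable = [a["name"] for a in assets if a["status"] == "zombie"]

    # "Do we really owe the ERP vendor maintenance fees?"
    erp_paid = all(a["maintenance_paid"] for a in assets if "erp" in a["name"])

    print(stale, retirable, erp_paid)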

IT assets will always be dynamic and therefore must be meticulously tracked all the time. Laptops are constantly on the move, servers are shuffled around or left in a depleted zombie state and HR is constantly hiring or letting employees go. Given that data center managers must now share IT asset information with many business units, it’s imperative that a fresh list is continually maintained.

We are all embarking upon a new digital world where network performance depends on understanding the interrelationships between hardware and software at a level that previous IT managers never had to contend with. Leveraging new tools for complete network and workload visibility will provide the transparency necessary to ensure smooth operations in our distributed IT ecosystem.

Bio

Mark Gaydos is CMO at Nlyte where he oversees teams that help organizations understand the value of automating and optimizing how they manage their computing infrastructure.

Security Measures to Consider When Migrating to the Cloud

By Brian Wilson, Director of Information Technology, BitTitan

As more enterprises begin migrating to the cloud, the question of cybersecurity is increasingly urgent. While cloud migration offers many benefits, it’s key to understand your company’s overall goals. Security and data protection can be maintained and even enhanced by a move to the cloud, but the appropriate processes and procedures must be understood and implemented for safeguards to be effective.

Set Appropriate Goals

Problems arise if you fail to understand or adequately set your company’s cloud-migration goals. “The cloud” is a big, amorphous term, and companies can get stuck when they find themselves in a “boiling the ocean” scenario. Migration projects must be broken down into deliverable actions with a realistic timeline.

It’s sometimes easy to assume the cloud is a panacea, especially given its cost-cutting benefits. Cost is certainly a motivating factor, but the cloud is not a cost-cutting solution for every situation in every business. For example, an inappropriately sized cloud environment, one larger than a company requires, will escalate costs.

It’s crucial to understand what an organization will gain in terms of flexibility, security and compliance. Most operating systems will work in the cloud, offering flexibility in the software and workloads companies deploy. In addition, many cloud companies make significant investments in security, far larger than any individual company’s IT department could make.

Take a Holistic View

Fundamentally, the overall migration process remains the same whether you’re moving from on-premises to cloud or cloud to cloud. In an on-prem environment, though, most companies are working with known systems and tool sets for security, network monitoring or mobile device management. Those existing tools might not translate to the cloud, even if, fundamentally, your processes haven’t changed. It’s important to plan for the right set of security processes and tools whenever a migration produces a hybrid infrastructure, whether temporarily during the migration or as part of the ongoing architecture.

Given this, it’s vital to take a holistic view and evaluate the total environment so you can plan how to manage, monitor and secure operations within the cloud. Also, it’s important to understand that migration often brings new security responsibilities to managed service providers (MSPs) and their clients. These might include new application scanning tools, intrusion detection systems with event logging, internal firewalls for individual applications and database or data-at-rest encryption.

Though the underlying platform is the cloud provider’s responsibility, it’s up to enterprises to decide how the platform will be used, what data will reside there, who will have access to it and how it will be protected. By thinking holistically about these things, you’ll be more successful in achieving the appropriate level of cybersecurity protection.

Stay Vigilant

The quest to guard against cyberthreats is never-ending. The cloud and all things associated with it are always evolving, and it’s a constant battle to stay one step ahead of the bad actors.

Therefore, companies must understand their risk profile and the level of protection they need. For example, businesses that handle personal data such as names, phone numbers, Social Security or credit card numbers, or medical information will likely have higher risk profiles than those that do not.

Sensitive data must be safeguarded, while appropriate employee education and procedures must be in place. The key to understanding your risk profile is to identify possible threats, and with that in mind, consider where you might be most vulnerable — both internally and externally. Use that information to drive conversations about the level of risk tolerance that is acceptable for your organization. In turn, this will define the level of investment required to minimize or mitigate any existing gaps in your risk profile.

Remember: regardless of whether data lives on-prem or in the cloud, human error remains the number-one security threat, driving data breaches through phishing attempts and ransomware. Companies should educate employees on appropriate procedures while also leveraging their provider’s security tips and offerings. This often involves communicating risks, making security a responsibility of all staff and providing people with routine training.

Not All Data is Equal

Finally, companies should understand how to differentiate and classify sensitive and non-sensitive data. Companies can come to rely on their MSP’s abilities to automate data storage and security.

For larger corporations that may be running an Azure environment, for example, there’s greater willingness to rely on their MSPs to automate various provisioning activities. If an organization wants more control in those areas, it must be aware of its responsibility to turn those features off.

Additionally, regarding governance, companies get far greater leverage through automation that can facilitate application deployment, perform routine maintenance tasks to provide a level of uniformity that follows best practices, and simplify compliance accreditation.

As a company considers a cloud migration, the simple edict is to understand from where you’re starting and where you ultimately hope to land — all before beginning a migration project. A clear vision of what your company wants to accomplish will ultimately determine your success. It’s a new environment that requires support from everyone involved.

Bio

Brian Wilson is the Director of Information Technology at BitTitan, where he specializes in the areas of IT strategy, roadmaps, enterprise systems and cloud/SaaS technologies. Prior to joining BitTitan, Brian worked as an executive with San Jose-based IT services company Quantum and in various IT consultant roles with Cascade Technology Consulting, PricewaterhouseCoopers and the Application Group. Brian has over 25 years of experience as a senior IT executive, with an industry background that spans high technology, consulting, commercial real estate and manufacturing.

Five Emerging Trends for MSPs and IT Pros in 2019

By Mark Kirstein, Vice President of Products, BitTitan

The new year brings a wave of eagerness and ambition for innovators across industries. For IT professionals and managed service providers (MSPs), this often means setting new business goals. For instance, in 2019 MSPs or IT firms may be considering new service offerings, building a new core competency, or simply growing revenue and improving profitability.

Regardless of the goal, as part of this process, it is often helpful to think about trends surrounding the adoption of technology solutions. At BitTitan, we’ve been thinking about this and want to share our thoughts on what to expect in the coming year:

1. Cloud solution adoption makes its way through the early majority

If a company is using only on-premises technology rather than cloud-based solutions, it’s likely falling behind the times.

Consider, as just one example, email hosted in the cloud. According to a recent survey from Gartner, just shy of 25 percent of public companies have made the jump to cloud email services, with adoption rates among SMBs even higher. In the coming year, we expect to see many more SMBs and enterprises alike moving to cloud-based email – the end of the early adopters and the beginning of the early majority.

Given this, MSPs and IT firms may want to audit the technology solutions and workstreams under their management to evaluate whether on-premises solutions would be more cost-effective if transitioned to the cloud.

2. Fueling the fire of cloud adoption

Remember that the enthusiasm for cloud-based solutions is being fueled by a number of factors, not just email. Consider that:

  • Many businesses have already successfully migrated email and/or other workloads to the cloud, boosting confidence among those who were once wary of cloud solutions.
  • Cloud providers like Microsoft are increasing license costs and shortening support cycles for on-premises solutions, pushing businesses toward cloud alternatives. As a result, maintaining legacy infrastructure is becoming more costly for IT.
  • Security concerns previously kept people from moving to the cloud, but these concerns are being addressed. Cloud solutions can provide a higher level of security and are better maintained by cloud providers like Microsoft or Google through regular updates and patches that address new cyberthreats. The same cannot be said for on-prem systems.

3. Customers are becoming more savvy about the cloud

While the last decade focused primarily on why and how organizations should move to the cloud, in the next decade we’ll see more managers focused on optimizing their cloud services. Tech professionals will grow more sophisticated in selecting cloud providers and adopting new services.

For instance, they may take a multi-cloud approach for more flexibility and room for negotiation, helping to stave off vendor lock-in while allowing businesses to host workloads with the cloud provider that makes the most sense for specific business objectives.

As a result, managing IT environments will become more complex. Hybrid and multi-cloud strategies dominate, and department-level technology decisions are driving an influx of SaaS solutions. These solutions can be challenging for the IT teams who manage governance and ensure broader business integration. As this trend continues in 2019, MSPs will seek additional software management solutions to ease transitions and troubleshooting.

4. The market for specialists heats up

Companies will move away from generalists to tackle their cloud needs, and MSPs might consider specializing in one particular area to distinguish themselves from competitors. A wealth of new technology is available — such as container services to move applications, serverless computing, blockchain applications and automation to manage IT environments — and more specialists are needed to manage the tech field’s growing landscape effectively.

Also, look for MSPs to further establish vertical specialties in industries such as health care or education, where speaking the end user’s language and understanding their specific ecosystem’s needs, challenges, and technical solutions gives MSPs a leg up.

5. Governance further commands attention

Another primary focus for IT in 2019 will be improved security and governance practices. For those coming from on-prem infrastructure with well-established processes, cloud governance looks far different. IT and MSPs have an opportunity to review and update these processes to ensure they’re appropriate for cloud-based systems. In addition to dictating where data is stored and for how long, governance plans also should address the availability, usability, and integrity of data.

Also, IT managers must ensure migration plans – whether to the cloud or between clouds – have security as a core tenet of their execution. Cyberthreats are only becoming more sophisticated, and any organization, regardless of size or industry, is vulnerable. Educate users about cyberthreats and keep systems and applications up to date, while exploring other options to ensure all bases are covered.

Despite new challenges in 2019, the outlook for IT professionals and the service provider landscape remains strong. Technology leaders who continue to look ahead and approach the cloud purposefully will help their organizations execute on their visions in the coming year and beyond.

Bio

Mark Kirstein is the Vice President, Products at BitTitan, leading product development and product management teams for the company’s SaaS solutions. Prior to BitTitan, Mark served as the Senior Director of Product Management for the Mobile Enterprise Software division of Motorola Solutions, continuing in that capacity following its acquisition by Zebra Technologies in 2014. Mark has over two decades of experience overseeing product strategy, development, and go-to-market initiatives.

When not on the road coaching his daughter’s softball team, Mark enjoys spending time outdoors and rooting for the Boston Red Sox. He holds a bachelor’s degree in Computer Science from California Polytechnic State University.

The Elements of a Good Disaster Recovery Plan

By Tim Mullahy, Executive Vice President and Managing Director at Liberty Center One

No one wants their business to have to weather a disaster – but sometimes they happen. If you go in without any concept of what you’re doing, you’re more or less guaranteed to be in crisis. But if you go in with a well-established disaster recovery plan? You’ll be able to survive just about anything.

Sometimes, bad things happen. Sometimes, those bad things are unavoidable. And sometimes, they impact your business in a way that could cost you clients, customers, and employees.

In today’s climate, your business faces a massive volume of threats, spread across a larger threat surface than ever before. Disaster recovery is critical to your security posture, as it’s often not a question of if you’ll suffer a cyber-incident, but rather of when.

Whether or not your organization survives a disaster largely depends on one thing – how well you’ve prepared yourself for it. With a good disaster recovery plan, you can weather just about any storm. Let’s talk about what such a plan involves.

A Clear Idea Of Potential Threats

It’s impossible to identify every single risk your business could possibly face – nor should you put time and resources into doing so. Instead, focus on the disasters you’re likeliest to face. For instance, a business located in Vancouver probably doesn’t have to worry about a tornado, but there’s always a chance that it could be struck by a flood.

When coming up with this list, consider your industry, the technology you use, your geographical location, and the political climate where you’re located. Incidents that impact all businesses include ransomware, malware, hardware failure, software failure, power loss, and human error. Targeted attacks are another threat to your organization, particularly if you work in a high-security space – you may even end up in the crosshairs of a state-sponsored black hat.

Ideally, your crisis response plan needs to be flexible enough to deal with any incident you deem likely, and adaptable enough that it can be applied when you encounter an unexpected disaster.

An Inventory Of All Critical Assets

What systems, processes, and data can your organization not survive without? What hardware is especially important to your core business, and what sort of tolerance does your entire organization have for downtime and data loss? Make a list of every asset you control, both hardware and software, and arrange that list in order from most important to least important.

From there, you want to ask yourself a few questions.

First, what systems are absolutely business-critical? This is the hardware and software your business cannot operate without – systems you need to keep as close to 100% uptime as possible. The server hosting a customer-facing application is one example.

Second, what data do you need to protect? Healthcare organizations, for example, are required to keep redundant backups of all patient data and to ensure that data is encrypted and accessible at all times. Figure out what files are most business-critical and prioritize those in your response plan.

Third, for the assets mentioned above, what is their tolerance for downtime? If those systems do go down, how much revenue will you potentially lose for each minute they’re offline? Are there any other considerations aside from revenue that mark them as important?

For instance, a communications platform for first responders needs 100% uptime – lives literally depend on it.

Finally, what can you do without? If you run a home-repair business that brings in customers mostly through word of mouth, your website going down probably won’t be too harmful to your bottom line. If, on the other hand, you’re an eCommerce store, your website is likely one of the most important assets you’ve got.
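
One back-of-the-envelope way to put numbers on downtime tolerance is to translate an uptime target into allowed minutes per year and weigh that against the revenue at risk. The figures below are hypothetical:

    # Convert an uptime target into allowed downtime, and estimate revenue
    # at stake per minute offline. All numbers are illustrative.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

    def allowed_downtime_minutes(uptime_pct: float) -> float:
        return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

    annual_online_revenue = 50_000_000  # hypothetical
    revenue_per_minute = annual_online_revenue / MINUTES_PER_YEAR

    print(allowed_downtime_minutes(99.9))   # ~526 minutes of downtime/year
    print(allowed_downtime_minutes(99.99))  # ~53 minutes/year
    print(round(revenue_per_minute, 2))     # ~$95.13 at risk per minute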

As you’ve no doubt surmised, no two disaster recovery plans are going to look the same. Every business has different needs and requirements. Every business has different assets they need to protect, and a different level of tolerance for downtime.

Once you’ve figured out your critical assets, ensure you have backups and redundant systems in place. These failover methods need to be thoroughly tested. You must be absolutely certain they’re in working order; you don’t want to find out the files on your backup server are corrupt after you’ve lost your hardware in a flood.

Accounting For People

Too many disaster recovery plans neglect the business’s most important resource – its people. How will employees escape the building during a catastrophic event? What should each staffer do during an emergency? Who’s responsible for coordinating emergency communication, reaching out to shareholders, and ensuring all critical systems fail over properly?

Ensure that roles and responsibilities during an incident are clearly defined and well-established. More importantly, your plan needs to include guidelines for shifting responsibility. If the staffer who’s meant to coordinate colleagues during a fire is on vacation, who steps into the role?

Your disaster recovery plan needs to account for these details, while also including a means of disseminating information between employees. Ideally, you’ll want a crisis communication platform of some kind. Ensure that everyone has access to that platform.

When establishing your communications guidelines, make sure you attend to the following:

  • How you will keep in touch with partners and shareholders
  • How you will notify customers of the incident
  • How employees will communicate during the incident

Seeing To Recovery & Service Restoration

So, you weathered the storm. Your business is still standing. Good – now it’s time for recovery.

You should already have a good idea of which services are most critical to your business from the inventory you performed, so figuring out which ones to restore first is a fairly simple process.

What you need to establish beyond service restoration is who you’ll reach out to, and how you’ll reach out to them. If clients or shareholders suffered monetary losses during the incident, how will you reimburse them? After the crisis has subsided, what will you do to improve your response in the next incident?

Practice and Evaluation

It’s been said that no plan survives first contact with the enemy. That’s true of disaster recovery, as well – if you leave your plan untested and unevaluated until your first disaster, it’s extremely likely you’re going to find weaknesses at the worst possible time. To identify areas that need improvement and familiarize staff with their responsibilities, run regular practice scenarios.

Additionally, you should constantly revisit your disaster recovery plan. Don’t approach it as a project. Approach it as a process.

Always look for ways you can improve it. Regularly revisit and re-evaluate it in light of new technology or new threats. And never assume you’ve done enough.

You can always be better.

Don’t Let A Crisis Cripple Your Business

Natural disasters. Hardware failure. Hackers and rogue employees. Malware and ransomware. The array of different threats facing your organization is absolutely staggering. A good crisis response and disaster recovery plan is critical if you’re to survive – critical to establishing a good cybersecurity posture.

Bio

Tim Mullahy is the Executive Vice President and Managing Director at Liberty Center One, a new breed of data center located in Royal Oak, MI. Tim has a demonstrated history of working in the information technology and services industry.

Cyber Hygiene And You – Five Things You Need To Do For A Strong Security Posture

By Max Emelianov, CEO of HostForWeb

Cyber hygiene shouldn’t be a difficult concept – yet it seems like many organizations struggle with it. Yours might even be among them. Either way, it’s probably better to be safe than sorry. Read on to see if you’ve done everything necessary to keep your security posture strong – and what you still need to improve on.

Hygiene’s pretty important. If you don’t regularly shower, keep your environment clean, and wash your hands, you get sick. In the same vein, if you aren’t actively trying to keep your systems, people, and data safe, your business is going to end up in a spot of trouble.

Trust me, I am going somewhere with this analogy.

Today, we’re going to talk about cyber hygiene. It’s a pretty simple concept, but one that’s surprisingly complicated (and often difficult) to incorporate into your own organization. In essence, it’s everything involved in maintaining a strong security posture and ensuring your infrastructure stays in working order.

There’s actually quite a bit to it, even if we just focus on the security side.

Know Your Risk Profile

First things first: you’re going to want to think like a cybercriminal. What assets or systems are most valuable to someone looking to make a quick buck off your business? What about someone wanting to defraud your organization or its staff, or a competitor looking to steal your intellectual property?

That’s only the first step. Next, you need to think about how a criminal might get access to sensitive assets. What elements of your infrastructure are most vulnerable to attack? Where are you most likely to experience a data breach, and how?

External threats from criminals aren’t the only thing you need to account for. You’ll also need to consider risks like internal bad actors, natural disasters, equipment failure, and more. The most important thing is that you have the security in place to protect yourself from all but the worst threats, and the resilience to survive should your systems still end up compromised.

Speaking of resilience…

Have a Disaster Recovery and Business Continuity Plan

You cannot control the weather. You cannot stop every cyberattack, nor can you account for a malicious insider. Eventually, there is a very good chance your systems will go down, a very good chance you will encounter a crisis of some kind.

How well you make it through that crisis depends on your level of preparation. It depends on how comprehensive and well-thought-out your disaster recovery and business continuity plans are; how prepared you are for the worst, in other words.

In broad strokes, a good disaster recovery/business continuity plan establishes the following:

  • Roles and responsibilities in the event of a crisis. Who is in charge of keeping critical infrastructure operational and ensuring failover happens as it should? Who will keep in touch with shareholders and business partners? Ensure every employee understands precisely what their role should be.
  • A response plan for a wide range of emergencies. Figure out what your business is likely to face, and plan to weather that. A general crisis response plan is also important.
  • Critical and non-critical assets. What systems and data are critical to your business? What systems need to operate without interruption, and which ones need to be brought back online as quickly as possible?
  • Communication details. How will people stay in touch? Contact numbers, emails, a crisis communication platform, etc.
  • Major infrastructure. Do you have backup systems in place to ensure there is no interruption of service? Have those systems been adequately tested?
  • Data backups. Do you retain multiple, redundant backups of critical data? How will you handle sensitive or regulated data?
  • Service recovery. What process will you have for getting services back online after an emergency?
  • Regular testing. This one is self-explanatory. Constantly evaluate and re-evaluate your crisis response plan.

Encourage Safe Practices By Staff

The old adage that your employees are the greatest security risk in your business holds true more than ever these days. Criminals are always going to seek the path of least resistance by default. What that means for you is that if you have nigh-unbreakable security infrastructure, they’ll simply try to gain access by bamboozling your employees.

And even if an employee doesn’t fall victim to the machinations of a hacker, they might still inadvertently compromise your business. Human error is the cause of most data breaches, after all. Unfortunately, there’s only so much you can do to mitigate this.

Do what you can to promote a culture of cybersecurity within your business. Ensure leadership is schooled in the importance of cyber best practices, and ensure you are regularly training and educating your staff on the ins and outs of staying safe in the digital world. More importantly, have systems in place to recognize people who best embrace and embody their role in keeping your organization’s data safe.

Make cybersecurity a part of everyone’s job. Because ultimately, whether you like it or not, it is. That’s not going to change anytime soon.

Don’t Forget About The Basics

We’ve talked about some fairly high-level stuff so far. Processes and policies, training programs, corporate culture, and so on. But the problem is, that’s not actually where the majority of businesses fail at cybersecurity.

As it turns out, most of them struggle with the foundation. A study by cybersecurity firm Tripwire found that 57% of organizations still struggle with visibility into their networks and systems, taking weeks, months, or longer to detect new devices or services. Many businesses (40%) still aren’t scanning regularly for vulnerabilities, and even more (54%) don’t collect and consolidate critical system logs into a single location.

It gets worse. 31% don’t even have a password policy in place, and 41% aren’t using multi-factor authentication. In short, their cyber hygiene is awful, regardless of any other steps they’re taking to protect their data.

Luckily, it’s fairly easy to avoid falling into the trap that they have:

  • Patch your systems regularly and immediately.
  • Scan for vulnerabilities on a daily basis.
  • Ensure you have complete visibility into all networks and systems within your organization.
  • Implement automated monitoring tools that alert you to any unusual network activity (a toy example follows this list).
  • Multifactor authentication: use it.
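
As a toy illustration of the monitoring bullet above, even a few lines of code can flag a deviation from a baseline; real tools are far more sophisticated, and the data and threshold here are invented:

    # Toy "unusual activity" alert: compare today's failed-login count
    # with a rolling baseline. Data source and threshold are illustrative.
    from statistics import mean, stdev

    history = [12, 9, 15, 11, 14, 10, 13]  # failed logins, last 7 days
    today = 42

    baseline, spread = mean(history), stdev(history)
    if today > baseline + 3 * spread:
        print(f"ALERT: {today} failed logins vs. baseline ~{baseline:.0f}")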

Understand That Cybersecurity Is Constantly Evolving

Last but certainly not least, one of the most common cybersecurity traps I see people fall into is the assumption that once their infrastructure is in place, their job is done. They don’t need to worry anymore – their data is safe, at least until next year sometime.

This is a dangerous mindset. The cybersecurity landscape is constantly shifting and evolving. You need to be cognizant of that. You need to pay attention to emerging vulnerabilities, new security techniques, and more.

Because if you’re not paying attention, you’ll simply be left behind.

Closing Thoughts

Whether you’re talking about your infrastructure or yourself, hygiene is critical. Poor personal hygiene can result in sickness and isolation. Poor cyber hygiene can result in lost or misplaced data, data breaches, and productivity bottlenecks.

You don’t want to fall victim to either – and now you know how to avoid both.

Bio

Max Emelianov started HostForWeb in 2001. In his role as HostForWeb’s CEO, he focuses on teamwork and providing the best support for his customers while delivering cutting-edge web hosting services.

The Top 4 Traits of Top Performing MSPs

Key Findings from IT Glue’s Global MSP Benchmark Survey

By Joshua Oakes, Documentation Evangelist, IT Glue

The managed services business is reinventing itself, quickly. Companies are starting to realize the value of process and planning. More MSP owners, having been in the game a while, are starting to think more carefully about their exit strategies. In fact, even if you’re just starting out, you should be thinking about how to maximize the valuation of your business. It’s never too early to start building your equity.

For most MSP owners, when it comes time to retire or leave the business, there are only a couple of viable options – sell the business, or wind it down. The latter option is problematic because all of the sweat equity the owner put into the business is for naught. The former option is better, but there’s a problem here, too: only around 20% of MSPs are sold. This makes sense – most MSPs are very small businesses, with their value deriving almost entirely from one or two key people. Buyers are looking for high-performing MSPs that aren’t reliant on key people, especially if those key people are exiting the business. It’s not easy to get into that top 20% of MSPs, but if you understand what those high performers look like, it becomes a lot easier.

So how do you get there? That Golden Quintile of MSPs that are attractive to prospective buyers – what do they look like? The results of IT Glue’s recent Global MSP Benchmark Survey provided us with some great insight into what the top 20% of performing MSPs actually look like. Size doesn’t matter – great MSPs range from one-person shops to integrated companies large enough to target small enterprise clients. But there are some common traits that they all share:

High Margins

Some MSPs are earning amazing margins. Net margins of at least 20% are required to get you into the Golden Quintile. There are a couple of key implications to this figure. First, it means that the best-performing MSPs aren’t price cutting in order to win business. They are focusing on the value that they deliver to their clients, and charging fees in accordance with that value. They’ve built their entire sales model around being a premium player in the market. For example, when they talk to prospects, they don’t get sucked into a negotiation about price. Instead, they highlight how they will handle tickets quickly, because the value they bring lies in maintaining as close to 100% uptime as possible. Combine this pricing approach with cost control measures, and you’re on your way.

Rapid Growth

The best-performing MSPs not only earn high margins but are growing quickly as well. The top 20% of MSPs are posting growth rates of at least 10% compounded annually. There are three keys to sustained double-digit growth.

  • Investment in sales and marketing
    • More than half of MSPs report struggling with sales, marketing or both. But investment in these areas is critical to lead generation and sustained growth.
  • Delivering on your promise
    • Selling great service is one thing, but if you deliver, you’ll gain customers who become evangelists. If lead gen is a pain point, these evangelists are critical for helping you attract new business.
  • Eliminating churn
    • Churn is evil – if you churn 10% of your customers every year, you need to add 20% just to hit 10% net growth (the quick arithmetic after this list shows why). Nuts to that. Deliver on your promises and you’ll go a long way toward eliminating churn.
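
Here is that churn arithmetic spelled out. In the simplest model, gross customer additions must cover both the growth target and the customers lost:

    # Gross adds needed = target net growth + churn (both as fractions of
    # the starting customer base).
    churn = 0.10        # lose 10% of customers per year
    target_net = 0.10   # want 10% net growth
    gross_adds = target_net + churn
    print(f"Must add {gross_adds:.0%} of the base just to net {target_net:.0%}")
    # -> Must add 20% of the base just to net 10%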

Process Orientation

According to Greg Abbott of Aabyss, a leading UK MSP, venture capitalists looking to buy MSPs will pay anywhere from 5-15% more for a turnkey business. If your business depends on you, the owner, and you are leaving when the sale is completed, you will not get the premium valuation you want. You need to build a business that can thrive without you, and that means having a process orientation. First, determine the best processes, perhaps by adopting lean methodology or other process improvement techniques. Second, document those processes. If the buyer feels confident that past performance will be replicable without you, your MSP will be more attractive and command a higher multiple.

Customer Focus

Not to be lost in all this is having a customer focus. If you truly want to deliver value, then you need to know what your customers value. Find out what their pain points are, and focus on the ways that you can mitigate or eliminate that pain. Having a strong customer focus increases the likelihood that you’ll have lower churn, and be able to earn higher margins while maintaining customer satisfaction.

Getting into the Golden Quintile definitely takes some work, but with a better sense of what the industry’s leaders are doing, it will be easier to get there yourself. IT Glue is a powerful IT documentation platform that contributes in many of these areas, especially delivering great service, optimizing your repeatable processes and lowering the cost of service delivery.

Bio: Joshua Oakes is the Documentation Evangelist for IT Glue, where he strives to produce thought-provoking pieces that help IT service providers improve their business, focusing on lean practices and the value chain.

GDPR – Comply or Pay High Fees

Mark Gaydos, Chief Marketing Officer, Nlyte Software

General Data Protection Regulation (GDPR) is Europe’s new data protection law that standardizes data protection across all 28 EU countries and imposes strict new rules on controlling and processing personally identifiable information (PII). The new mandate replaces the 1995 EU Data Protection Directive, supersedes the 1998 UK Data Protection Act and goes into effect on May 25, 2018. Organizations that are not compliant can be fined up to 4% of their global annual revenue or €20 million, whichever is greater. Simply put: GDPR extends the protection of personal data and data protection rights by giving control back to EU residents.

Time is running out for data centers to comply with GDPR rules for tracking the location of data and its transport from storage device to server to customer. No doubt, IT personnel know that the infrastructure’s physical security is as critical as the digital management of consumer data assets. But the IT physical infrastructure is not confined to the data center’s walls. For this reason, GDPR compliance extends to colocation facilities, managed service providers, hosting services, SaaS vendors and virtually any X-aaS vendor. To mitigate risks, organizations need visibility into their vendors’ IT frameworks to ensure the integrity of the consumer data they are responsible for.

What are the GDPR requirements? As reported by TechCrunch:

  • Anyone involved in processing EU consumer data, including third-party entities involved in processing data to provide a particular service, can be held liable for a breach.
  • When an individual no longer wants their data to be processed by a company, the data must be deleted, “provided that there are no legitimate grounds for retaining it.”
  • Companies must appoint a data protection officer if they process sensitive data on a large scale or collect information on many consumers (small and midsize enterprises are exempt, if data processing is not their core business).
  • Companies and organizations must notify the relevant national supervisory authority of serious data breaches as soon as possible.
  • Parental consent is required for children under a certain age to use social media (a specific age within a group ranging from ages 13 to 16 will be set by individual countries).
  • There will be a single supervisory authority for data protection complaints, aimed at streamlining compliance for businesses.
  • Individuals have a right to data portability to enable them to more easily transfer their personal data between services.

One way to expedite GDPR compliance is to use a Data Center Infrastructure Management (DCIM) software solution. DCIM allows an organization to track the location of data within the physical IT infrastructure, so it knows if and when consumer data is transported across borders. This DCIM-enhanced data-transport visibility is critical for understanding the following (a brief illustrative sketch appears after the list):

  • Secondary locations of infrastructure for safe handling and transportation of data across borders.
  • The location of critical data as it moves across all network devices — regardless of location.
  • Expedited response to data breaches.
  • Exact geographic sites and locations of where the data is replicated.
  • All security tools that are deployed, enabled and residing on identified devices.
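
To make location tracking concrete, here is a brief illustrative sketch of the underlying idea: flag PII-bearing assets and report any replication outside the EU. The record layout is an assumption for illustration, not any DCIM product’s schema:

    # Flag PII-bearing assets and report replication outside the EU.
    assets = [
        {"name": "db-eu-01", "country": "DE", "holds_pii": True,
         "replicas": ["FR", "US"]},
        {"name": "web-eu-02", "country": "NL", "holds_pii": False,
         "replicas": ["IE"]},
    ]
    EU = {"DE", "FR", "NL", "IE"}  # abbreviated member-state list

    for asset in assets:
        if asset["holds_pii"]:
            offshore = [c for c in asset["replicas"] if c not in EU]
            if offshore:
                print(f"{asset['name']}: PII replicated outside the EU -> {offshore}")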

Since GDPR mandates meeting specific articles, organizations can rely on a DCIM software solution to help meet the following articles:

  • Article 45 – Transfers on the Basis of an Adequacy Decision – Provides visibility into entire-lifecycle tracking, with accountability and compliance visibility and reporting.
  • Article 35 – Data Protection Impact Assessment – The workflow feature captures asset and application names while the system is operating or hosting data, with the ability to assign a data protection officer’s review activity within any IMAC (install, move, add, change) data center process. Using asset management and asset integrity monitoring in a DCIM solution allows for easy tracking of data at rest and the infrastructure used for that data. Furthermore, it provides a report of all workflows with a GDPR activity — whether active or closed.
  • Article 58 – Investigative Powers – The asset optimization and tracking support feature provides compulsory data protection audits when an organization needs to provide reports.
  • Article 17 – Right to be Forgotten (Right to Erasure) – The asset management feature allows controllers to flag and track the lifecycle of assets used for storing or processing data subjects’ personal data. This tracking extends from the point of existence (in physical computer infrastructure) through decommissioning or destruction. This type of visibility into a complete lifecycle record of the data’s physical location is critical to meeting the mandate.
  • Articles 59, 33, 33a – Activity Reports and Data Breach Notification to Authorities – The impact assessment report provides a list of assets flagged for GDPR tracking, including each asset’s location and status. This includes such critical information as mapped business application, data last audited, rack, name and IP address, among others.

May 25, 2018 is almost here! To meet the GDPR compliance deadline and avoid hefty fines, put into place a GDPR compliance plan that includes a full-suite DCIM software solution.

Bio: Mark Gaydos is Chief Marketing Officer for Nlyte Software, the leading data center infrastructure management (DCIM) solution provider for seamlessly automating data center operations and infrastructure into an enterprise’s IT ecosystem.

What Businesses Need to Know in the Wake of a Major Data Breach

By Jason Tan, CEO, Sift Science

Online businesses everywhere are going to be dealing with the effects of data breaches in the post-Equifax breach era. It’s a tough truth to swallow, but these large-scale data breaches have become a fact of life – and it’s not just the breached business that pays the price. The reality is, even if your company wasn’t breached, you still have a huge challenge on your hands. As fraudsters mine the valuable data that’s been compromised, all e-commerce sites and financial institutions need to be on alert.

The downstream consequence of a major breach is that stolen information is sold on the dark web many times over. Since two-thirds of people reuse the same login information on multiple sites, fraudsters who get hold of it can use these stolen credentials for criminal purposes all over the web. The information may have been stolen elsewhere, but if even a small handful of your customers get their accounts hacked or experience fraud on your site, it’s your company that loses the customer’s trust, and your brand reputation that is at risk.

The new reality that businesses need to accept is that a significant number of their customers have been victims, or soon will be. Because of this, there are important things businesses need to look out for to protect themselves. The trick is not to create a bad experience for customers in the process.

Keep an eye out for signs of account takeover.

Last year, 48% of online businesses saw an increase in account takeover (ATO), according to the Sift Science Fraud-Fighting Trends report. And the growing number of major breaches will only exacerbate this trend, potentially flooding the dark web with names, addresses, Social Security numbers, and other personal information that fraudsters can leverage to gain access to a legitimate user’s account. They then make purchases with a stored payment method or drain value from the user’s account.

Some of the signals that could point to an ATO:

  • Login attempts from different devices and locations
  • Switching to older browsers and operating systems
  • Buying more than usual, or buying higher-priced items
  • Changing settings, shipping address, or passwords
  • Multiple failed login attempts
  • Suspicious device configurations, like proxy or VPN setups

Keep in mind that individually, each of these signs may be normal behavior for a particular user. It’s only when you apply behavioral analysis on a large scale, looking at all of a user’s activity and all activity of users across the network, that you can accurately detect ATO.
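
As a toy illustration of combining weak signals (not Sift Science’s actual model), one could score a login event against a user’s history; every feature, weight and threshold below is invented:

    # Toy rule-based ATO score; a production system would learn weights
    # from behavioral data across the whole network.
    def ato_score(event: dict, profile: dict) -> int:
        score = 0
        if event["device"] not in profile["known_devices"]:
            score += 2
        if event["country"] != profile["home_country"]:
            score += 2
        if event["failed_logins"] >= 3:
            score += 3
        if event["via_proxy"]:
            score += 2
        return score  # e.g., flag for review above 5

    profile = {"known_devices": {"iphone-8a"}, "home_country": "US"}
    event = {"device": "win-pc-ff", "country": "RO",
             "failed_logins": 4, "via_proxy": True}
    print(ato_score(event, profile))  # -> 9, well above a review threshold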

Monitor for fake accounts and synthetic identity fraud.

Fraudsters can also take all of the different pieces of personal data leaked in a breach to steal someone’s identity and create new accounts. They may also pick and choose pieces from various people’s accounts – like a birthday, Social Security number, and name – and mix them together to create an entirely new ID.

To keep tabs on fake accounts, you can monitor new signups to look for risky patterns, like a sudden spike in new accounts that can’t be attributed to a specific promotion or seasonal trend. If the average time it takes a new user to sign up suddenly gets much faster, that may point to fraudsters using a script to quickly create accounts. And seeing multiple new accounts coming from the same IP address or device is a red flag for a single person creating many accounts.
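
The multiple-accounts-per-IP signal in particular is easy to picture in code. A minimal sketch with invented data and an arbitrary threshold:

    # Toy signup monitor: many new accounts from one IP is a classic
    # fake-account signal.
    from collections import Counter

    signups = [("alice", "1.2.3.4"), ("bob", "5.6.7.8"),
               ("carol", "5.6.7.8"), ("dave", "5.6.7.8")]

    by_ip = Counter(ip for _, ip in signups)
    for ip, count in by_ip.items():
        if count >= 3:
            print(f"Review accounts from {ip}: {count} signups")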

Stay focused on maintaining user trust.

Even if a breach doesn’t happen on your site, any downstream fraud attacks still happen on your watch. If you don’t invest in protecting your users from the devastating effects of ATO, identity theft, and fraud, you will soon lose their trust. Trust is earned in drops, but lost in buckets.

At the same time, e-commerce businesses and financial institutions should make sure they aren’t overly cautious to the point where they’re rejecting good customers and denying legitimate accounts. Preventing fraud is a delicate balancing act, and the right technology – which looks at a range of data points to make an accurate prediction about what is and isn’t fraudulent – can help you strike the right balance.

Fight technology with technology.

We are at a point where no one can afford to put their head in the sand when these breaches happen, and that includes marketing leaders. It’s time to develop a healthy paranoia and start operating from the point of view that every breach is going to affect you sooner or later, in some way or another. Get your house in order now, because breaches are going to keep happening. Prepare to fight technology with technology. Fraudsters are becoming increasingly good at pulling together large data sets to create ever more nuanced and sophisticated attacks. Businesses have to get out ahead of them with technology that lets them leverage data to build more nuanced and sophisticated authentication processes.

About the Author:

Jason Tan is the CEO of Sift Science, a trust platform that offers a full suite of fraud and abuse prevention products designed to attack every vector of online fraud for industries and businesses across the world.

Reducing Data Center Risk with Data Center Infrastructure Management (DCIM)

Mark Gaydos, Chief Marketing Officer, Nlyte Software

In 2017, data center failures around the world became big news. The British Airways outage in May, which caused the cancellation of over 400 flights and stranded 75,000 passengers, cost the company an estimated $112 million in refunds and compensation. This doesn’t take into account the cost of reputation damage, and the loss of productivity during the downtime.

It later came to light that this outage was caused by a simple mistake made by one person: an engineer working at Heathrow who disconnected and then reconnected a power supply. The reconnection caused a power surge that took down not only the primary data center site, but the backup site as well.

The British Airways incident is just one example of how fragile our IT and computing infrastructure can be. Depending on which statistics you consult, human error is the culprit in 22%-38% of data center outages. Other top causes of downtime include UPS failure, heat or CRAC (computer room air conditioning) failure, weather issues and, in some cases, generator failure.

The costs associated with data center downtime can rapidly accumulate to hundreds of thousands of dollars per incident, and more in the case of financial market outages. As data centers increase in complexity, and start to include more remote processing locations, the task of assuring uptime becomes more challenging with an increased degree of monitoring difficulty.

The good news is that most data center outages are preventable – especially if data center managers have better insight into operations, which improves reaction time.

A Data Center Infrastructure Management (DCIM) solution gives managers this “better insight,” providing visibility into all operations to significantly mitigate the risk of downtime.

Here are some examples of risks that can be easily reduced with a DCIM solution:

Overheating

A DCIM solution provides real-time temperature monitoring throughout a facility. This makes spotting hot spots in the computing infrastructure as simple as looking at a dashboard showing a real-time heat map. With this knowledge, any data center manager can rearrange equipment or load, or simply adjust fan speed, to remediate hot spots. In addition, DCIM solutions can identify opportunities for safe ambient temperature adjustments, so the facility’s temperature can be raised without causing damage to IT equipment.

Power Overloading

The first step in protecting against power overload is knowing not only where power is being used but how it might be used more safely and efficiently. DCIM’s real-time power monitoring and tracking can prevent power overload. With alert features, the right people are notified when a pre-set power limit is close to being reached, giving data center personnel ample time to react, make changes and shift the load before a major disaster strikes. And if, despite this foreknowledge, catastrophe does occur, a DCIM system can simplify disaster recovery.
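
The alerting logic itself is simple; the hard part is the live instrumentation. A minimal sketch with invented readings and a 90% warning threshold:

    # Toy power-limit alert: warn before a circuit approaches its rated load.
    # Real DCIM tools poll PDUs and branch circuits for these readings.
    racks = {"rack-12": (4.6, 5.0), "rack-13": (3.1, 5.0)}  # (kW draw, kW limit)

    for rack, (draw, limit) in racks.items():
        if draw >= 0.9 * limit:
            print(f"WARNING: {rack} at {draw / limit:.0%} of rated capacity")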

Flawed Redundancy

Flawed redundancy relates to power failure. The ability to test the resiliency of the power chain is essential to good data center stewardship. A DCIM solution provides the ability to perform “what if” tests of the power chain in a virtual environment, with no risk to the actual infrastructure. With this ability, a data center manager can test for situations and answer questions such as the following (a toy simulation after the list shows the idea):

What if this piece of equipment were to suddenly fail?

Where would the load go?

What else might fail as a result?

Are my A and B sides safe?
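
A toy simulation of that “what if” idea: model which racks each power feed serves, fail one feed, and see which racks are left without power. The topology is invented for illustration:

    # Fail a feed and report racks with no surviving source of power.
    feeds = {"ups-a": {"rack-1", "rack-2"}, "ups-b": {"rack-2", "rack-3"}}

    def at_risk(failed_feed: str) -> set:
        surviving = set().union(
            *(racks for feed, racks in feeds.items() if feed != failed_feed))
        return feeds[failed_feed] - surviving

    print(at_risk("ups-a"))  # -> {'rack-1'}: rack-1 has no B-side feed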

Capacity

The biggest problem with capacity planning in a data center is not knowing how much capacity is actually being used and how much is left. A DCIM solution supplies not just power capacity intelligence but physical space information as well. Moreover, it can provide information about how physical capacity is being used and how it might be used more efficiently, enabling consolidation of resources. The risk of running out of space or power is no longer an issue with a DCIM solution deployed. In addition, DCIM users have consolidated IT equipment to postpone or even eliminate the need for multi-million-dollar expansion projects.

Asset Management

Another data center risk has to do with asset management: the challenge of knowing what equipment is where. A DCIM solution not only keeps track of equipment throughout its useful life – providing information on where each asset is, what it is connected to and when it is moved – but also alerts the user when an asset has reached the end of its life and should be retired and replaced. This type of monitoring keeps the data center from having to support older equipment, which has a higher risk of failure and becomes difficult and expensive to maintain.

Workflow

Here’s one data center risk that’s directly related to human error. A built-in workflow engine in a DCIM solution helps data center staff avoid errors by giving them a central repository of what work has been performed and by whom, as well as what still needs to be accomplished.

Human Error

If we agree that people aren’t perfect and make mistakes, then we can agree that people might be the weakest link in the data center chain. But with a DCIM solution in place, data center teams have access to valuable information to prevent errors. A DCIM solution is a data repository all data center staff can utilize to make more intelligent, informed decisions.

These are just a few examples of how a DCIM solution can help reduce risks and cut costs in a data center environment.

To find out more about reducing data center risk and how a DCIM solution can help, access this pre-recorded webinar. Hear 451 Research’s Rhonda Ascierto and Nlyte Software’s Mark Gaydos provide valuable examples on how to lower data center risks, OPEX and CAPEX.

Bio: Mark Gaydos is Chief Marketing Officer for Nlyte Software, the leading data center infrastructure management (DCIM) solution provider for seamlessly automating data center operations and infrastructure into an enterprise’s IT ecosystem.