Category: Featured Article

The Top 4 Traits of Top Performing MSPs

Key Findings from IT Glue’s Global MSP Benchmark Survey

By Joshua Oakes, Documentation Evangelist, IT Glue

The managed services business is reinventing itself, quickly. Companies are starting to realize the value of process and planning. More MSP owners, having been in the game a while, are starting to think more carefully about their exit strategies. In fact, even if you’re just starting out, you should be thinking about how to maximize the valuation of your business. It’s never too early to start building your equity.

For most MSP owners, when it comes time to retire or leave the business, there are only a couple of viable options – sell the business, or wind it down. The latter is problematic because all of the sweat equity the owner put into the business is for naught. The former is better, but there’s a problem here, too. Only around 20% of MSPs are sold. This makes sense – most MSPs are very small businesses, with their value deriving almost entirely from one or two key people. Buyers are looking for high-performing MSPs that aren’t reliant on key people, especially if those key people are exiting the business. It’s not easy to get into that top 20% of MSPs, but if you understand what those high performers look like, it becomes a lot easier.

So how do you get there? That Golden Quintile of MSPs that are attractive to prospective buyers – what do they look like? The results of IT Glue’s recent Global MSP Benchmark Survey provided us with some great insight into what the top 20% of MSPs actually look like. Size doesn’t matter – great MSPs range from one-person shops to integrated companies large enough to target small enterprise clients. But there are some common traits that they all share:

High Margins

Some MSPs are earning amazing margins. Net margins of at least 20% are required to get you into the Golden Quintile. There are a couple of key implications to this figure. First, it means that the best-performing MSPs aren’t price cutting in order to win business. They are focusing on the value that they deliver to their clients, and charging fees in accordance with that value. They’ve built their entire sales model around being a premium player in the market. For example, when they talk to prospects, they don’t get sucked into a negotiation about price. Instead, they highlight how they will handle tickets quickly, because the value they bring lies in maintaining as close to 100% uptime as possible. Combine this pricing approach with cost control measures, and you’re on your way.

Rapid Growth

The best-performing MSPs not only earn high margins, but they are growing quickly as well. The top 20% of MSPs are achieving growth rates of at least 10%, compounded annually. There are three keys to sustained double-digit growth.

  • Investment in sales and marketing
    • More than half of MSPs report struggling with sales, marketing or both. But investment in these areas is critical to lead generation and sustained growth.
  • Delivering on your promise
    • Selling great service is one thing, but if you deliver, you’ll gain customers who become evangelists. If lead gen is a pain point, these evangelists are critical for helping you attract new business.
  • Eliminating churn
    • Churn is evil – if you churn 10% of your customers every year, you need to add 20% just to hit 10% net growth (see the quick arithmetic below). Nuts to that. Deliver on your promises and you’ll go a long way toward eliminating churn.
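For anyone who wants to sanity-check that churn math, here is a quick, illustrative calculation; the customer counts are made up for the example.

```python
# Back-of-the-envelope check of the churn math above (illustrative only).
# Starting from 100 customers, churning 10% and adding 20% in the same year
# nets out to roughly 10% growth.

starting_customers = 100
churn_rate = 0.10          # 10% of customers lost during the year
new_customer_rate = 0.20   # 20% added through sales and referrals

lost = starting_customers * churn_rate
added = starting_customers * new_customer_rate
ending_customers = starting_customers - lost + added

net_growth = (ending_customers - starting_customers) / starting_customers
print(f"Ending customers: {ending_customers:.0f}")   # 110
print(f"Net growth: {net_growth:.0%}")               # 10%
```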

Process Orientation

According to Greg Abbott of Aabyss, a leading UK MSP, venture capitalists looking to buy MSPs will add anywhere from 5-15% for a turnkey business. If your business depends on you, the owner, and you are leaving when the sale has been completed, then you will not get the premium valuation you want for your business. You need to build a business that can thrive without you, and that means having a process orientation. First, you need to determine the best processes, perhaps by adopting lean methodology or other process improvement techniques. Second, you need to document your processes. If the buyer feels confident that past performance will be replicable without you, your MSP will be more attractive, and command a higher multiple.

Customer Focus

Not to be lost in all this is having a customer focus. If you truly want to deliver value, then you need to know what your customers value. Find out what their pain points are, and focus on the ways that you can mitigate or eliminate that pain. Having a strong customer focus increases the likelihood that you’ll have lower churn, and be able to earn higher margins while maintaining customer satisfaction.

Getting into the Golden Quintile definitely takes some work, but with a better sense of what the industry’s leaders are doing, it will be easier to get there yourself. IT Glue is a powerful IT documentation platform that contributes in many of these areas, especially delivering great service, optimizing your repeatable processes and lowering the cost of service delivery.

Bio: Joshua Oakes is the Documentation Evangelist for IT Glue, where he strives to produce thought-provoking pieces that help IT service providers improve their business, focusing on lean practices and the value chain.

GDPR – Comply or Pay High Fines

Mark Gaydos, Chief Marketing Officer, Nlyte Software

General Data Protection Regulation (GDPR) is Europe’s new data protection law that standardizes data protection across all 28 EU countries and imposes strict new rules on controlling and processing personally identifiable information (PII). The new mandate replaces the 1995 EU Data Protection Directive, supersedes the 1998 UK Data Protection Act and goes into effect on May 25, 2018. Organizations that are not compliant can be fined up to 4% of their global annual revenue. Simply put: GDPR extends the protection of personal data and data protection rights by giving control back to EU residents.

Time is running out for data centers to comply with GDPR rules for tracking the location of data and its transport from storage device to server to customer. No doubt, IT personnel know that the infrastructure’s physical security is as critical as the digital management of consumer data assets. But the IT physical infrastructure is not confined to the data center’s walls. For this reason, GDPR compliance extends to colocation facilities, managed service providers, hosting services, SaaS vendors, and virtually any X-aaS vendor. To mitigate risks, organizations need visibility into their vendors’ IT framework to ensure the integrity of the consumer data they are responsible for.

What are the GDPR requirements? As reported by TechCrunch:

  • Anyone involved in processing EU consumer data, including third-party entities involved in processing data to provide a particular service, can be held liable for a breach.
  • When an individual no longer wants their data to be processed by a company, the data must be deleted, “provided that there are no legitimate grounds for retaining it.”
  • Companies must appoint a data protection officer if they process sensitive data on a large scale or collect information on many consumers (small and midsize enterprises are exempt, if data processing is not their core business).
  • Companies and organizations must notify the relevant national supervisory authority of serious data breaches as soon as possible.
  • Parental consent is required for children under a certain age to use social media (a specific age within a group ranging from ages 13 to 16 will be set by individual countries).
  • There will be a single supervisory authority for data protection complaints, aimed at streamlining compliance for businesses.
  • Individuals have a right to data portability to enable them to more easily transfer their personal data between services.

One way to expedite GDPR compliance is to use a Data Center Infrastructure Management (DCIM) software solution. DCIM allows an organization to track the location of data within the physical IT infrastructure, so it knows if and when consumer data is transported across borders (a brief sketch of this kind of check follows the list below). This DCIM-enhanced data transport visibility is critical for understanding:

  • Secondary locations of infrastructure for safe handling and transportation of data across borders.
  • The location of critical data as it moves across all network devices — regardless of location.
  • Expedited data breach reporting.
  • Exact geographic sites and locations of where the data is replicated.
  • All security tools that are deployed, enabled and residing on identified devices.
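To make the cross-border point concrete, here is a minimal, hypothetical sketch of the kind of location check a DCIM asset inventory enables. The asset records, field names and country list are invented for illustration and do not reflect any particular vendor’s data model.

```python
# Hypothetical sketch: flag assets that hold EU personal data but sit outside
# the EEA, using the kind of location inventory a DCIM tool maintains.
# Field names and records are illustrative, not any vendor's actual schema.

EEA_COUNTRIES = {"DE", "FR", "NL", "IE", "SE"}  # abbreviated list for the example

assets = [
    {"id": "srv-001", "site": "Frankfurt DC", "country": "DE", "holds_eu_pii": True},
    {"id": "srv-002", "site": "Ashburn DC",   "country": "US", "holds_eu_pii": True},
    {"id": "srv-003", "site": "Dublin Colo",  "country": "IE", "holds_eu_pii": False},
]

def cross_border_transfers(inventory):
    """Return assets where EU personal data is stored outside the EEA."""
    return [a for a in inventory
            if a["holds_eu_pii"] and a["country"] not in EEA_COUNTRIES]

for asset in cross_border_transfers(assets):
    print(f"Review transfer basis for {asset['id']} at {asset['site']}")
```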

Since GDPR mandates compliance with specific articles, organizations can rely on a DCIM software solution to help meet the following:

  • Article 45 – Transfers on the Basis of an Adequacy Decision – Visibility into the entire lifecycle tracking – with accountability and compliance visibility and reporting.
  • Article 35 – Data Protection Impact Assessment – Workflow feature captures asset and application names while the system is operating or hosting data with the ability to assign a data protection officer’s review activity within any IMAC data center process. Using asset management and asset integrity monitoring in a DCIM allows for easy tracking of data at rest and the infrastructure used for that data. Furthermore, it provides a report of all workflows with a GDPR activity — whether they are active or closed.
  • Article 58 – Investigative Powers – The asset optimization and tracking support feature provides compulsory data protection audits when an organization needs to provide reports.
  • Article 17 – Right to be Forgotten (Right to Erasure) – The Asset Management feature allows controllers to flag/track the lifecycle of assets used for storage or data subjects processing – of all personal customer data. This tracking capability extends from the point of existence (in physical computer infrastructure) through decommissioning or destruction.  This type of visibility into a complete lifecycle record of the data’s physical location is critical to meeting the mandate.
  • Articles 59, 33, 33a – Activity Reports and Data Breach Notification to Authorities – Impact assessment report provides a list of flagged assets for GDPR tracking, providing assets’ location and status. This includes such critical information as mapped business application, data last audited, rack, name, IP address among others.

May 25, 2018 is almost here! To meet the GDPR compliance deadline and avoid hefty fines, put in place a GDPR compliance plan that includes a full-suite DCIM software solution.

Bio: Mark Gaydos is Chief Marketing Officer for Nlyte Software, the leading data center infrastructure management (DCIM) solution provider for seamlessly automating data center operations and infrastructure into an enterprise’s IT ecosystem.

What Businesses Need to Know in the Wake of a Major Data Breach

By Jason Tan, CEO, Sift Science

Online businesses everywhere will be dealing with the effects of data breaches in the post-Equifax era. It’s a tough truth to swallow, but these large-scale data breaches have become a fact of life – and it’s not just the breached business that pays the price. The reality is, even if your company wasn’t breached, you still have a huge challenge on your hands. As fraudsters mine the valuable data that’s been compromised, all e-commerce sites and financial institutions need to be on alert.

The downstream consequence of a major breach is that stolen information is sold on the dark web many times over. Since two-thirds of people use the same login information on multiple sites, when fraudsters get ahold of it, they use these stolen credentials for criminal purposes all over the web. The information may have been stolen elsewhere, but if even a small handful of your customers get their accounts hacked or experience fraud on your site, it’s your company that loses the customer’s trust, and your brand reputation that is at risk.

The new reality that businesses need to accept is that a significant number of their customers have been victims, or soon will be. Because of this, there are important things businesses need to look out for to protect themselves. The trick is not to create a bad experience for customers in the process.

Keep an eye out for signs of account takeover.

Last year, 48% of online businesses saw an increase in account takeover (ATO), according to the Sift Science Fraud-Fighting Trends report. And the growing number of major breaches will only exacerbate this trend, potentially flooding the dark web with names, addresses, Social Security numbers, and other personal information that fraudsters can leverage to gain access to a legitimate user’s account. They then make purchases with a stored payment method or drain value from the user’s account.

Some of the signals that could point to an ATO:

  • Login attempts from different devices and locations
  • Switching to older browsers and operating systems
  • Buying more than usual, or higher priced items
  • Changing settings, shipping address, or passwords
  • Multiple failed login attempts
  • Suspicious device configurations, like proxy or VPN setups

Keep in mind that individually, each of these signs may be normal behavior for a particular user. It’s only when you apply behavioral analysis on a large scale, looking at all of a user’s activity and all activity of users across the network, that you can accurately detect ATO.
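As a rough illustration of that idea, the toy score below combines several weak signals so that no single one triggers a review on its own. The signal names, weights and threshold are invented for the example; a real system would learn them from network-wide behavioral data rather than hard-code them.

```python
# Toy account-takeover risk score: no single signal decides, but several weak
# signals together push a session over the review threshold.
# Weights and threshold are invented for illustration only.

ATO_SIGNAL_WEIGHTS = {
    "new_device": 0.2,
    "new_country": 0.3,
    "old_browser": 0.1,
    "password_changed": 0.2,
    "shipping_address_changed": 0.2,
    "failed_logins_before_success": 0.3,
    "proxy_or_vpn": 0.2,
}

def ato_risk_score(session_signals):
    """Sum the weights of the signals observed in this session."""
    return sum(ATO_SIGNAL_WEIGHTS[s] for s in session_signals
               if s in ATO_SIGNAL_WEIGHTS)

session = ["new_device", "new_country", "shipping_address_changed"]
score = ato_risk_score(session)
print(f"risk score: {score:.1f}")
if score >= 0.6:          # any one signal alone stays below this threshold
    print("route to step-up authentication or manual review")
```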

Monitor for fake accounts and synthetic identity fraud.

Fraudsters can also take all of the different pieces of personal data leaked in a breach to steal someone’s identity and create new accounts. They may also pick and choose pieces from various people’s accounts – like a birthday, Social Security number, and name – and mix them together to create an entirely new ID.

To keep tabs on fake accounts, you can monitor new signups to look for risky patterns, like a sudden spike in new accounts that can’t be attributed to a specific promotion or seasonal trend. If the average time it takes a new user to complete signup suddenly drops, that may point to fraudsters using a script to quickly create accounts. And seeing multiple new accounts coming from the same IP address or device is a red flag for a single person creating many accounts.
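A simplified sketch of those signup checks might look like the following. The sample data and thresholds are fabricated purely to show the pattern.

```python
# Illustrative checks for the fake-account patterns described above: many
# accounts from one IP address and unusually fast form completion.
# Thresholds are arbitrary examples, not recommendations.

from collections import Counter
from statistics import mean

signups = [
    # (ip_address, seconds_to_complete_signup)
    ("203.0.113.7", 4), ("203.0.113.7", 5), ("203.0.113.7", 3),
    ("198.51.100.2", 42), ("192.0.2.9", 55),
]

# 1. Many accounts created from the same IP address
per_ip = Counter(ip for ip, _ in signups)
for ip, count in per_ip.items():
    if count >= 3:
        print(f"{ip}: {count} new accounts - possible scripted signups")

# 2. Average completion time far below the historical baseline
baseline_seconds = 45   # hypothetical historical average
if mean(t for _, t in signups) < baseline_seconds * 0.5:
    print("signup completion time dropped sharply - review recent accounts")
```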

Stay focused on maintaining user trust.

Even if a breach doesn’t happen on your site, any downstream fraud attacks still happen on your watch. If you don’t invest in protecting your users from the devastating effects of ATO, identity theft, and fraud, you will soon lose their trust. Trust is earned in drops, but lost in buckets.

At the same time, e-commerce businesses and financial institutions should make sure they aren’t overly cautious to the point where they’re rejecting good customers and denying legitimate accounts. Preventing fraud is a delicate balancing act, and the right technology – which looks at a range of data points to make an accurate prediction about what is and isn’t fraudulent – can help you strike the right balance.

Fight technology with technology.

We are at a point where no one can afford to put their head in the sand when these breaches happen, and that includes marketing leaders. It’s time to develop a healthy paranoia and start operating from the point of view that every breach is going to affect you sooner or later, in some way or another. Get your house in order now, because breaches are going to keep happening. Prepare to fight technology with technology. Fraudsters are becoming increasingly good at pulling together large data sets to create ever more nuanced and sophisticated attacks. Businesses have to get out ahead of them with technology of their own – leveraging data to build more nuanced and sophisticated authentication processes.

About the Author:

Jason Tan is the CEO of Sift Science, a trust platform that offers a full suite of fraud and abuse prevention products designed to attack every vector of online fraud for industries and businesses across the world.

Reducing Data Center Risk with Data Center Infrastructure Management (DCIM)

Mark Gaydos, Chief Marketing Officer, Nlyte Software

In 2017, data center failures around the world became big news. The British Airways outage in May, which caused the cancellation of over 400 flights and stranded 75,000 passengers, cost the company an estimated $112 million in refunds and compensation. This doesn’t take into account the cost of reputation damage, and the loss of productivity during the downtime.

It later came to light that this outage was caused by a simple mistake made by one person – an engineer working at Heathrow, who disconnected and reconnected a power supply. This caused a power surge which took down not only the primary data center site, but the backup site as well.

The British Airways incident is just one example of how fragile our IT and computing infrastructure can be. Depending on the study, human error is the culprit in 22–38% of data center outages. Other top causes of downtime include UPS failure, heat or CRAC failure, weather issues and, in some cases, generator failure.

The costs associated with data center downtime can rapidly accumulate to hundreds of thousands of dollars per incident, and more in the case of financial market outages. As data centers increase in complexity, and start to include more remote processing locations, the task of assuring uptime becomes more challenging with an increased degree of monitoring difficulty.

The good news is that most data center outages are preventable – especially if data center managers have better insight into operations, which improves reaction time.

A Data Center Infrastructure Management (DCIM) solution gives these managers that better insight, providing visibility into all operations and significantly mitigating the risk of downtime.

Here are some examples of risks that can be easily reduced with a DCIM solution:

Overheating

A DCIM solution provides real-time temperature monitoring throughout a facility. This makes spotting hot spots in the computing infrastructure as simple as looking at a dashboard showing a real-time heat map. With this knowledge, any data center manager can rearrange equipment or loads, or simply adjust fan speeds, to remediate hot spots. In addition, DCIM solutions can identify opportunities for safe ambient temperature adjustments so the facility’s temperature can be raised without causing damage to IT equipment.
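As a simple illustration, hot-spot detection from rack inlet temperatures can be reduced to a threshold check like the one below. The readings and the 27 °C threshold are examples only; real facilities would follow their own thermal guidelines and sensor layout.

```python
# Minimal sketch of hot-spot detection from rack inlet temperatures.
# Readings and threshold are illustrative.

rack_inlet_temps_c = {
    "A01": 22.5, "A02": 23.1, "A03": 29.4,   # A03 is running hot
    "B01": 24.0, "B02": 21.8,
}

HOT_SPOT_THRESHOLD_C = 27.0

hot_spots = {rack: t for rack, t in rack_inlet_temps_c.items()
             if t > HOT_SPOT_THRESHOLD_C}

for rack, temp in hot_spots.items():
    print(f"Rack {rack} inlet at {temp:.1f} C - rebalance load or adjust airflow")
```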

Power Overloading

The first step in protecting against power overload is knowing not only where power is being used, but how it might be used more safely and efficiently. DCIM’s real-time power monitoring and tracking can deter power overload. With alert features, the right people are notified when a pre-set power limit is close to being reached, giving data center personnel ample time to react, make changes and shift the load before a major disaster strikes. And if, despite this foreknowledge, catastrophe does occur, a DCIM system can simplify disaster recovery.
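The pre-set power alert described above boils down to a headroom check. The circuits, draws and 80% warning level in this sketch are illustrative, not recommendations.

```python
# Sketch of a pre-set power alert: warn before a circuit's draw approaches
# its limit. Values and the 80% warning level are examples only.

circuits = [
    {"name": "PDU-1/cct-3", "draw_kw": 4.6, "limit_kw": 5.0},
    {"name": "PDU-2/cct-1", "draw_kw": 2.1, "limit_kw": 5.0},
]

WARN_AT = 0.80  # warn when a circuit passes 80% of its rated limit

for c in circuits:
    utilisation = c["draw_kw"] / c["limit_kw"]
    if utilisation >= WARN_AT:
        print(f"{c['name']} at {utilisation:.0%} of limit - shift load before it trips")
```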

Flawed Redundancy

Flawed redundancy relates to power failure. The ability to test the resiliency of the power chain is essential to good data center stewardship. A DCIM solution provides the ability to perform “what if” tests of the power chain, in a virtual environment, with no risk to the actual infrastructure. With this ability, a data center manager can test for situations and answer such questions as the following (a toy simulation of this kind of test appears after the questions):

What if this piece of equipment were to suddenly fail?

Where would the load go?

What else might fail as a result?

Are my A and B sides safe?
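Here is a toy version of such a what-if test, run against a hypothetical two-feed topology. It fails one feed, reports any single-corded racks that would go dark, and shows how much load lands on the surviving feed; the capacities and loads are invented for the example.

```python
# Toy "what if" test of a redundant power chain: fail one feed and check that
# every rack still has a live path and how much load the surviving feed takes.
# Topology and numbers are hypothetical.

feeds = {"A": {"capacity_kw": 60}, "B": {"capacity_kw": 60}}
racks = [
    {"name": "R1", "load_kw": 18, "fed_by": {"A", "B"}},
    {"name": "R2", "load_kw": 25, "fed_by": {"A", "B"}},
    {"name": "R3", "load_kw": 12, "fed_by": {"A"}},      # single-corded risk
]

def simulate_feed_failure(failed_feed):
    """Report stranded racks and the load shifted onto the remaining feeds."""
    stranded = [r["name"] for r in racks if r["fed_by"] == {failed_feed}]
    surviving_load = sum(r["load_kw"] for r in racks if r["fed_by"] - {failed_feed})
    remaining_capacity = sum(feeds[f]["capacity_kw"] for f in feeds if f != failed_feed)
    print(f"Feed {failed_feed} fails: stranded racks {stranded or 'none'}, "
          f"{surviving_load} kW shifts onto {remaining_capacity} kW of remaining capacity")

simulate_feed_failure("A")   # R3 loses power; 43 kW moves to feed B
simulate_feed_failure("B")   # everything survives on feed A
```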

Capacity

The biggest problem with capacity planning in a data center is not knowing how much capacity is actually being used and how much is left. A DCIM solution supplies not just power capacity intelligence, but physical space information as well. Moreover, it can provide information about how the physical capacity is being used, and how it might be used more efficiently, enabling consolidation of resources. The risk of running out of space or power is no longer an issue if you have a DCIM solution deployed. In addition, DCIM users have consolidated IT equipment to actually postpone or eliminate the need for multi-million dollar expansion projects.

Asset Management

Another data center risk has to do with asset management. The challenge is knowing what equipment is where. A DCIM solution not only keeps track of equipment throughout its useful life – providing information on where the asset is, what it is connected to and when it is moved – but also alerts the user when an asset has reached the end of its life and should be retired and replaced. This type of monitoring keeps the data center from having to support older equipment, which has a higher risk of failure and becomes difficult and expensive to maintain.

Workflow

Here’s one data center risk that’s related to human error. A built-in workflow engine in a DCIM solution helps data center staff avoid errors by giving them a central repository of what work has been performed and by whom, as well as what still needs to be accomplished.

Human Error

If we agree that people aren’t perfect and that they make mistakes, then we can agree that people might be the weakest link in the data center chain. But with a DCIM solution in place, data center teams have access to valuable information to prevent errors. A DCIM solution is a data repository that all data center staff can use to make more intelligent, informed decisions.

These are just a few examples of how a DCIM solution can help reduce risks and cut costs in a data center environment.

To find out more about reducing data center risk and how a DCIM solution can help, access this pre-recorded webinar. Hear 451 Research’s Rhonda Ascierto and Nlyte Software’s Mark Gaydos provide valuable examples on how to lower data center risks, OPEX and CAPEX.

Bio: Mark Gaydos is Chief Marketing Officer for Nlyte Software, the leading data center infrastructure management (DCIM) solution provider for seamlessly automating data center operations and infrastructure into an enterprise’s IT ecosystem.

The impact of GDPR: What businesses should plan for in 2018

David Thomas, CEO of Evident ID

After a busy year of increasing data breaches and threats to personal data across the globe, a major data privacy reform effort from the European Union is barreling down the pipeline. It’s an important step forward for consumers’ rights and safety; however, companies around the globe now have the challenge of getting protective systems in place and must re-evaluate how they manage personal data. And the stakes for noncompliance are significant, with the reform becoming standard policy in just a few short months.

What is the GDPR?

The General Data Protection Regulation (GDPR) is an EU edict designed to improve the overall standard for data privacy while synchronizing data privacy laws across Europe. It will change how a wide range of businesses handle, hold, store and protect information. Its official and inflexible enforcement date is May 25, 2018, a mere four months away.

In addition to specific country requirements, businesses have to meet a minimum standard across all 28 EU member countries as part of the GDPR requirements. This standard is significant and will likely take a large investment to meet. One PwC survey showed that 68% of companies expect to spend between $1 million and $10 million.

Who does it affect?

GDPR’s increased geographical scope is arguably the biggest change in European data privacy regulations. The new rules apply to all companies residing in any of the EU’s 28 member states as well as companies based outside of the member states that process and store personal data of EU citizens. Additionally, the regulation takes a wide view of what constitutes personal identification data – ranging from social media posts to an individual IP address.

Why is it important to me?

Noncompliance penalties under the GDPR are steep: up to €20 million or four percent of global annual turnover, whichever is higher. This marks a huge change in scale for potential penalties. For example, Facebook received a €1.2 million penalty in Spanish courts this past year for sharing profile information with advertisers. That type of information sharing will carry a much steeper fine once the regulatory change goes into effect. Businesses that fail to adhere to these new rules also expose themselves to class-action lawsuits from victims in any of the 28 member countries, to say nothing of damage to their brands and commercial reputations.
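For a sense of scale, the higher-of-two penalty rule works out as follows; the turnover figure is invented for the example.

```python
# The higher-of-two penalty rule quoted above, as a one-line calculation.

def max_gdpr_fine(global_annual_turnover_eur):
    """Upper tier: the greater of 20 million euros or 4% of global annual turnover."""
    return max(20_000_000, 0.04 * global_annual_turnover_eur)

print(f"{max_gdpr_fine(2_000_000_000):,.0f} EUR")  # 80,000,000 EUR for 2bn EUR turnover
```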

Four short months is not much time to understand the GDPR’s many moving parts and build out internal processes in order to reach compliance. And remember, it’s not just about meeting compliance by May. It’s also about creating a system that supports sustainable compliance. GDPR is the new standard and there’s no going back.

How hard could it be?

Several requirements will challenge your security team, but we wanted to highlight three important components that could require major operational overhauls:

  1. Stronger consent conditions
    Companies are allowed to store and process personal data for a specific use case only when an individual consents. According to the EU’s GDPR website, the request for consent “must be given in an intelligible and easily accessible form.” Once a company is permitted to use an individual’s data, that data may only be used for the purpose defined when the initial consent was given; if the person no longer wishes to engage with the company for that purpose, their personal data must be removed from the appropriate systems (a minimal sketch of this appears after the list).
  2. Mandatory breach notification
    As stated on the EU’s GDPR website, companies must report a data breach to the relevant supervisory authority within 72 hours of detecting it. Affected individuals must also receive notification “without undue delay.”
  3. Privacy by design
    Businesses are now legally obligated to build data protection into information management systems from the outset rather than treat security as an add-on. Patchwork fixes will no longer cut it.
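As a minimal sketch of the consent requirement in the first item, the toy data model below ties stored personal data to a consented purpose and erases it when consent is withdrawn. It is deliberately simplistic and is not a substitute for a real GDPR programme.

```python
# Minimal sketch of purpose-limited consent and erasure on withdrawal.
# The data model is hypothetical and far simpler than a real compliance system.

from datetime import datetime, timezone

consents = {}   # (subject_id, purpose) -> timestamp of consent
records = {}    # subject_id -> personal data held, keyed by consented purpose

def record_consent(subject_id, purpose, data):
    consents[(subject_id, purpose)] = datetime.now(timezone.utc)
    records.setdefault(subject_id, {})[purpose] = data

def may_process(subject_id, purpose):
    """Process only for the purpose the subject actually consented to."""
    return (subject_id, purpose) in consents

def withdraw_consent(subject_id, purpose):
    """On withdrawal, drop both the consent record and the associated data."""
    consents.pop((subject_id, purpose), None)
    records.get(subject_id, {}).pop(purpose, None)

record_consent("user-42", "newsletter", {"email": "someone@example.com"})
print(may_process("user-42", "newsletter"))   # True
print(may_process("user-42", "profiling"))    # False, consent was never given
withdraw_consent("user-42", "newsletter")
print(records["user-42"])                     # {} since data was removed with the consent
```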

Only time will tell how businesses respond to this watershed moment in data security. 2018 will be a year of changes across the cybersecurity landscape starting with this critical shift in regulatory requirements for companies. The sooner companies start to evolve their security management protocols, the safer both their customers and their businesses will be.

Bio: David Thomas is the CEO at Evident. He is an accomplished cybersecurity entrepreneur, having held key leadership roles at market pioneers Motorola, AirDefense, VeriSign, and SecureIT. He has a history of introducing innovative technologies, establishing them in the market, and driving growth – with each early-stage company emerging as the market leader.

Dropbox Dropping AWS: What Gives?

By Emil Sayegh, CEO & President of Hostway

Here we go again. Towards the end of 2017, we saw another company announce its departure from AWS, as Dropbox officially packed up its hundreds of petabytes of data in favor of rolling out its own custom infrastructure. The story of why a highly successful, high-value “born in the cloud” company left AWS after eight years is as interesting as the significant technical achievement behind it all. It’s ironic that when Dropbox needed more, the AWS public cloud just wasn’t enough.

Dropbox’s business needs might be familiar to many in the technology business: it identified goals such as reducing costs, increasing speed, gaining more control, and delivering a better experience for its users. The fact that as much as 75 percent of the company’s users are outside the United States created the need for a shift in technology. Dropbox has responded by progressively taking its services as close to “the edge” as possible. The AWS public cloud could simply not provide the control or affordability to scale the way the company needed.

It is hard to reconcile this story with the massive traction we saw at AWS re:Invent, the Amazon Web Services annual conference, which drew record-breaking attendance of nearly 50,000 people and showed the tremendous interest in all the advantages that the AWS cloud has to offer. So, is this Dropbox move a flash in the pan or a sign of a different trend?

AWS Exodus or Multi-Cloud Influx?

We’re not all in the Dropbox line of business, but there are lessons in this departure that apply to many others. In recent months, other companies have also signaled a shift away from AWS for similar reasons – even though, for every company moving away, there are dozens moving into the public cloud. In 2016, Spotify announced it would pack up and leave for Google’s cloud; Target recently announced that it would pull in its infrastructure (again); Apple has reined in at least part of its vast services infrastructure from AWS; and even the mega-retailer Wal-Mart is in the midst of a death match with AWS, attempting to push its partners to depart the platform. It is telling that, for companies of this size with all their capital resources, AWS isn’t meeting their needs; in other cases, the AWS service itself is simply running up expenses far too fast.

AWS limitations repeatedly come down to some main points:

  1. The cost of platform-locked specialized engineering – Vendor lock-in can be found in the platform and the engineering required to use AWS. There is a high degree of proprietary knowledge that comes with AWS territory. Developers, architects and even webmasters must know this platform from the ground up. They can even get certified in as many as six AWS-only specialties. AWS can become a complete and everlasting buy-in.
  2. Limited control – If you have specialized needs such as getting your compute to the edge, increased flexibility, exact server specifications, auditing or visibility into root access on a system, you can pretty much forget about it in any practical sense. This is a shared cloud, and those are things that the public will never get.
  3. Point of failure – CIOs and businesses don’t exactly enjoy having to explain outages to frustrated users, which is exactly what happened when a major AWS loss of services affected customers in February 2017. The lesson learned is this: Just because it’s built in a cloud doesn’t mean it is automatically resilient under any realistic definition. The minimal-risk play here is that applications and infrastructure must be architected across multiple regions. All of that comes at a cost, of course, making simplicity in architecture and application all the more important.
  4. Unexpected costs – Many find that the cloud expenses they plan on aren’t the expenses they wind up with. The costs of traffic, storage and other features add up quickly. Public cloud services can be a geometrically expanding expense – the more you put in and the higher the adoption, the more unpleasantly surprising that utilization bill is.

It is ironic that the marketing notion of “only paying for what you use,” monthly pay structures and the promise of no contracts can become an expensive proposition that is very difficult to get out of. These flaws are not fatal in any way; AWS is growing at astronomical rates. But the answer clearly lies in managing AWS and public clouds correctly and architecting them, along with other tools, the right way. We can see that the new world coming upon us is a hybrid, multi-cloud world where multiple IT infrastructures will coexist. Certainly AWS, Azure and Google Cloud will continue to dominate the landscape for a long time; however, we are seeing the advent of multi-cloud architectures whereby these platforms are viewed as tools in the toolbox, and not a single destination.

The Destiny of Cloud Choice

It is not uncommon that many successful and smart companies get to a stage where they find it advantageous to move from AWS towards another solution, or mix AWS with other solutions. It is not always an either-or choice. The AWS value re-evaluation point is inevitable in case after case, as is the re-evaluation of any IT infrastructure.

So, what exactly is going on? Well, for years, the industry has talked about a number of compelling points surrounding a flexible hybrid multi-cloud. This includes the promises of:

  • Portable workloads
  • Configurable performance architecture
  • Security capabilities
  • Cost advantages

This is exactly what we’re seeing in the realization that the canned confines of a single public cloud are not the optimal platform for these needs.

So, what is the answer? Leave AWS and hyperscale cloud altogether? Is it running your own infrastructure? Do we go back to the old ways of doing things?

No, not really. Not everyone can build their own global cloud environment like Dropbox has. And certainly, there’s no going back to the old days.

Meet the Third-Generation Cloud (a.k.a. Hybrid Multi-Cloud)

Current and next-gen businesses do have an alternative, and it is very early in the game. You may find it surprising that the overwhelming majority of enterprise workloads remain off the cloud. The reason is that, after all this time, a single public cloud hasn’t delivered everything a business needs – from legacy workload management to ease of use to cost-effectiveness. That changes with managed multi-clouds and hybrid clouds. Only a managed hybrid cloud can retain the benefits of public clouds in combination with the benefits of bare metal resources.

Managed hybrid clouds and multi-clouds provide:

  • Full control over systems
  • Better affordability
  • Portable right-size workloads
  • Non-proprietary knowledge requirements
  • Significant cost advantages
  • Better support

Hybrid allows organizations to seamlessly combine dedicated infrastructure with hyperscale multi-cloud capabilities. Organizations have also discovered they can gain even more advantages and portability by using container and orchestration technologies like Docker and Kubernetes, microservices architectures, and a variety of cross-cloud management tools.

As we look ahead to the next generation of applications and the enhanced capabilities of popular information systems, artificial intelligence and big data are a big part of our future. The AWS cloud by itself simply cannot address all of these needs alone. Managed hybrid cloud solutions are here now to help with the transition to the public cloud, and they are becoming the de facto pathway, as evidenced by the huge MSP ecosystem created around AWS and Azure. If you are considering a cloud solution, and you require consistent costs and the ultimate in capabilities, a managed multi-cloud solution could be the answer.

The Consulting MSP – How to Profit from Specialization

By Caroline Paine, Director of MSP Sales at OnApp

MSPs field all kinds of requests for a range of cloud services from existing and prospective clients alike. Those that specialize and find their niche can be much more successful than those who try to be all things to all people. AWS, DigitalOcean, GCE and other mega-cloud providers have spent years building up infrastructure that offers point-and-click access to compute or storage services for all levels of consumer. However, they don’t have a consultative approach to providing solutions. This is where competitive – and most importantly, competent – MSPs can shine!

By understanding the needs of specific business types and catering to them, a regional MSP can win customers and thrive. In this article, we’ll look at what it takes to be a consulting MSP, which challenges must be overcome, and how a cloud management platform can help.

The Consulting MSP

For most MSPs, trying to be all things to all customers is becoming obsolete. It’s better to serve a specific market niche. Certainly, there are plenty of niches to be claimed: offering an “IT department on demand” service for small and medium-sized businesses; focusing on managed hosting for banks and financial services firms, healthcare organizations, or oil and gas; or specializing in specific applications or professional services. These are all valid niches.

Understanding the compute needs of a chosen niche is the key to a consultative relationship. Rather than simply providing servers and storage, the MSP should sit down with the customer and plan out resource needs through busy and slack times.

For example, if we look at banks and think about how they’re going to be using their capacity, every night a bit of FinTech software needs to do a lot of processing – so compute resources should be increased during overnight hours. If we look at healthcare, the winter months have higher capacity requirements due to a higher incidence of illness, meaning more patient records need accessing. E-commerce businesses get much busier around Black Friday, Cyber Monday, Boxing Day, and other big sales days, and they need higher capacity to handle huge spikes in traffic.

Customer-driven provisioning challenges

By anticipating these service requirements and provisioning for them automatically, an MSP can keep customers very happy. In order to do that, MSPs need the ability to provide compute and storage resources on demand. They shouldn’t force their customers to request a different level of service and sign a new contract. Rather, they should enable flexible, customer-driven provisioning, with contract language that specifies a standard level of service and a “burst” level of service for a specific added charge.

How can MSPs provide this? There are several key challenges.

Simplifying management – it’s difficult to enable customer-driven provisioning without an orchestration layer or a cloud management solution in place. Without such a solution in place, the process becomes quite manual – the IT managers must physically check on available resources in various places throughout the data center, and that takes time, much like a physical stock check in a supermarket. But, if the MSP is using a cloud management platform, it can provide a service where the customer can see what’s available in terms of compute resources and consume more themselves.

Avoiding over-provisioning – an MSP that wants to be ready for anything has to maintain a large supply of idle resources that can be turned into capacity as customers need it. This requires a large up-front investment. To avoid over-provisioning, the MSP needs to sit down and consult with its customers to create an action plan for capacity scaling on demand, making sure that infrastructure is in place for customers’ future needs. A cloud management platform helps by revealing exactly how much compute, storage and networking power is available at any given time.

Moving away from fixed contracts – rather than having a fixed contract for a standard level of services, the MSP should be flexible and offer resource bursting. The customers pay for a minimum commitment of resources and have an overage charge for bursting capacity. This gives them the freedom to expand quickly without going back to the MSP, and gives the MSP the security of extra resources being measured and chargeable.
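An illustrative bill under that commit-plus-burst model might be calculated like this; the prices and usage figures are invented for the example.

```python
# Illustrative invoice for a commit-plus-burst contract: a fixed charge for
# the committed resources plus an overage rate for any burst above it.

committed_vcpus = 32
committed_rate = 20.00   # $ per vCPU per month, covered by the commitment
burst_rate = 30.00       # $ per vCPU per month above the commitment

peak_vcpus_used = 44     # e.g. a Black Friday week pushed usage above the commit

base_charge = committed_vcpus * committed_rate
burst_charge = max(0, peak_vcpus_used - committed_vcpus) * burst_rate
print(f"Base: ${base_charge:,.2f}  Burst: ${burst_charge:,.2f}  "
      f"Total: ${base_charge + burst_charge:,.2f}")
```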

Requirements for Success

What infrastructure attributes make it possible to rapidly and cost-effectively become a consultative MSP? There are several key requirements.

Centralization

To deliver the responsiveness and flexibility that keeps customers happy, MSPs need a centralized management infrastructure that allows resource planning and allocation through a unified interface. Rather than having different systems track compute, storage and network supply separately, the management tool should allow central control over the entire infrastructure. This is essential for understanding and assigning resources.

Adaptability

With flexible, automated resource management, the MSP can rapidly adapt to its customers’ unique needs. That might mean enabling deployments in multiple locations across a region, or flexibility when it comes to contracts and billing. For example, the infrastructure management platform could handle scaling automatically – it could watch to see if a customer’s usage reaches, let’s say, 90 percent of capacity for a certain period of time and automatically provision more resources. When the customer’s usage drops below, say, 30 percent, excess capacity is removed. The MSP saves time and money by not having to manually adjust resources, and the customer wins because they only pay for what’s being used at any given time.
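That 90/30 rule reduces to a small decision function. In practice a cloud management platform would also smooth readings over a time window; this sketch shows only the core logic, with made-up numbers.

```python
# Threshold-based scaling decision, mirroring the 90%/30% example above.

SCALE_UP_AT = 0.90    # add capacity above 90% utilisation
SCALE_DOWN_AT = 0.30  # release capacity below 30% utilisation

def scaling_decision(used_units, allocated_units, step=2):
    """Return the new allocation given current usage and allocation."""
    utilisation = used_units / allocated_units
    if utilisation >= SCALE_UP_AT:
        return allocated_units + step                 # provision more resources
    if utilisation <= SCALE_DOWN_AT:
        return max(step, allocated_units - step)      # remove excess capacity
    return allocated_units                            # leave allocation unchanged

print(scaling_decision(used_units=19, allocated_units=20))  # 22, scale up
print(scaling_decision(used_units=5,  allocated_units=20))  # 18, scale down
print(scaling_decision(used_units=12, allocated_units=20))  # 20, no change
```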

Where the workload fits best

Multi-hypervisor support is also essential. The MSP should help the customer place each workload with the right technology. With a cloud management platform that supports multiple hypervisors, an MSP doesn’t have to be all KVM or VMware anymore; it can use the best tool at the right cost for the job.

The Benefits of a Cloud Management Platform

For a consultative provider, having a cloud management platform delivers the agility needed to respond quickly to customer requests along with the automation needed to control staff expenditures. There are several specific benefits to be gained.

Provisioning efficiency: Cloud provisioning is software-driven; it requires minimal staff effort to perform. Rather than racking new servers as customer needs change, an MSP can carve out new resources from existing infrastructure and provision them on the fly. To offer services, MSPs need a cloud platform with the ability to orchestrate across a range of hypervisors – and to achieve peak efficiency, they also need to be able to manage these centrally. By being able to see all physical servers, firewalls, storage and virtual servers in one place, it’s easier to react to customer needs and issues as they arise.

Administration efficiency: A cloud management platform should minimize manual effort at every point in the customer lifecycle. With the right platform, properly-trained personnel, and some consulting from the chosen vendor, technicians should be able to provision a new customer in hours rather than weeks.

Vital to this is the ability to create permission-based user roles and user groups so that, once the service is in production, clients can self-serve resources within a secure framework – access control gives the customer only what they should be able to access. The cloud management platform should also capture customer needs and profile them into ‘templates.’ Once a template is created for one style of deployment, it can be easily modified to onboard a second customer with similar needs, and so on. Having a central template repository makes provisioning easier and faster for administrators and also reduces provisioning errors.
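A rough sketch of role-scoped self-service and reusable templates could look like the following. The roles, permissions and template fields are hypothetical and only illustrate the structure, not any particular platform’s API.

```python
# Sketch of role-scoped self-service and reusable deployment templates.
# Roles, permissions and template fields are illustrative only.

import copy

ROLES = {
    "client_admin": {"view_usage", "resize_vm", "create_vm"},
    "client_user":  {"view_usage"},
}

def can(role, action):
    """Simple permission check before a self-service action is allowed."""
    return action in ROLES.get(role, set())

templates = {
    "small-ecommerce": {"vcpus": 8, "ram_gb": 32, "storage_gb": 500, "hypervisor": "KVM"},
}

def onboard_from_template(template_name, **overrides):
    """Clone an existing template and tweak it for the next, similar customer."""
    deployment = copy.deepcopy(templates[template_name])
    deployment.update(overrides)
    return deployment

print(can("client_user", "create_vm"))                      # False
print(onboard_from_template("small-ecommerce", ram_gb=64))  # tweaked copy for a new client
```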

Billing efficiency: Leveraging a solution that also intricately calculates resources for billing by customer is another element that will save hours of manual work, and improve margins quickly. Getting this right is crucial for rapid growth.

Resource efficiency: With the ability to treat the entire compute, network, and storage infrastructure as a flexible pool of resources, MSPs can easily assign specific resources to specific clients and bill for them accordingly, eliminating custom racking and stacking for individual clients. What’s more, the MSP can replicate one customer’s setup for the next customer, and simply tweak the resource allocations or service mix to suit the new customer.

MSPs can thrive if they offer a consultative service to customers, specialize in a particular market, and enable customers to quickly and cost-effectively meet their service needs. A cloud management platform simplifies resource provisioning, billing, and infrastructure management so MSPs can successfully serve their customers without breaking the bank.

Bio: Caroline Paine is director of MSP sales at OnApp. She brings together solutions to overcome the challenges that MSPs, telcos and other service providers face. She joined OnApp as a start-up in 2010, and has since helped thousands of MSPs, telcos and datacenter operators create new revenue opportunities, and stay profitable and competitive in a fast-changing market.

Small Businesses Need Dark Web Monitoring for Today’s Cybersecurity Risk

By Kevin Lancaster, CEO at ID Agent

According to a comprehensive analysis of security breaches conducted by Gemalto, last year saw almost 1.4 billion data records lost or stolen. Juniper Research has the global cost of data breaches reaching $2.1 trillion by 2019. The numbers are alarming, and while 2017 statistics are still unfolding, we already know there is great cause for concern. With modern malware, new-age ransomware and the rise of Bitcoin, the IT market has grown more complex, and sophisticated cybersecurity threats are increasing in both frequency and scale.

The NotPetya ransomware campaign cost pharmaceutical giant Merck more than $300 million in Q3 2017, and sources suggest that Merck will be hit with that amount again in Q4. Mondelez International, maker of Oreo cookies and Cadbury chocolates, estimates the Petya malware attack shaved three percentage points from their Q2 sales growth, due to interruptions in shipping and invoicing.

The magnitude of some of these attacks is astounding, but many large corporations have the resources to survive the disruptions experienced at the hands of these criminal activities. It is small businesses, however, that feel the impact hardest when a cyberattack occurs, due to a lack of preparedness, resources and confidence in their ability to stop an attack. Some estimates say that as many as one third of small-to-medium-sized businesses were hit by ransomware in 2016, forcing many of them to halt operations completely.

Cybercriminals have learned that smaller attacks can be replicated easily and carried out against multiple companies – including small and medium sized businesses – for greater revenue. It only takes a small number of successful attacks to yield substantial revenue – and incentive.

With the evolution of today’s attacks, companies of all sizes need to be vigilant and place a higher priority on protecting their employees and their corporate networks and systems. And small businesses are relying on their MSPs. Are you offering the latest protective services that your clients need to protect their networks and systems? Are you prepared for a client cyberattack?

Not long ago, selling cybersecurity services to clients meant offering simple monitoring and patching services. Today’s ransomware threat – and the Bitcoin economy that fuels it – wasn’t even around a few years ago, when you may have signed some of your client contracts. The market has changed substantially, so your services – and what you charge for them – need to change as well.

Don’t be afraid to talk about increased pricing. As an MSP, you are protecting your clients’ most valuable assets, assuming much risk in securing networks and systems, and you need to be compensated. There is substantial value in these services – more value today than ever before. MSPs are providing more services – user awareness training, active endpoint protection and more. In fact, if you aren’t charging enough for protective services, your clients may question why and look to others who may be offering seemingly better services.

Fundamental cybersecurity best practices include backing up data regularly, keeping software up-to-date and staying on top of the common tactics used to spread ransomware. Today’s MSPs should also be providing Dark Web monitoring services – solutions that scour millions of sources including botnets, criminal chat rooms, peer-to-peer networks, malicious websites and blogs, bulletin boards, illegal black market sites and other private and public forums – to identify and monitor for an organization’s compromised or stolen employee and customer data. Dark Web monitoring services are allowing IT service providers, MSPs and MSSPs to educate their clients about the high risk of the Dark Web and protect them from the dramatic rise in credential-based exploits.
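Conceptually, the credential-matching part of such a service reduces to comparing breach records against a client’s monitored domains. The records and domains below are fabricated, and a real offering aggregates far more sources and metadata than this sketch suggests.

```python
# Simplified sketch of credential monitoring: match records from a breach
# dump against a client's email domains. Data is fabricated for illustration.

monitored_domains = {"examplesmb.com", "clinicdemo.org"}

breach_records = [
    {"email": "jane@examplesmb.com", "source": "credential dump, Jan 2018"},
    {"email": "bob@unrelated.net",   "source": "forum paste"},
]

def compromised_for_clients(records, domains):
    """Return breach records whose email domain belongs to a monitored client."""
    return [r for r in records if r["email"].split("@")[-1] in domains]

for hit in compromised_for_clients(breach_records, monitored_domains):
    print(f"Alert client: {hit['email']} found in {hit['source']} - force a password reset")
```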

The Dark Web – the large portion of the Internet that is hidden from conventional search engines, holding a wealth of stolen data and illegal activity – must not be overlooked in an up-to-date security plan. In addition, delivering affordable add-on services with 24/7/365 alerting and monitoring for signs of compromised credentials allows MSPs to quickly and cost-effectively increase monthly recurring revenue, customer stickiness, dependence and satisfaction, as well as attract and retain new customers.

Personally identifiable information (PII) – names, email addresses, passwords, dates of birth and IP addresses – is being stolen at alarming rates. Hackers, including nation states, organized crime, hacktivists, malicious insiders and motivated individuals, are using our PII to access and steal our money in a variety of ways. While cyber breaches are no secret, many don’t realize how exposed organizations and individuals are to PII turning up on the Dark Web, which in turn leaves corporate systems highly vulnerable.

Most small and medium sized businesses don’t have the knowledge or resources to protect themselves against the sophisticated attacks looming today. As the MSP, ensure your clients are protected against today’s inevitable threats and be prepared when they strike by offering the latest, most comprehensive, protective security services, including Dark Web monitoring.

7 Key Problems Must Be Addressed For The Sharing Economy To Thrive

David Thomas, CEO of Evident ID

Consumers embrace the sharing economy for the freedom of choice it offers; using a digital platform means consumers are no longer tied to big conglomerates, but instead deal directly with peer providers. The sharing economy also gives consumers power by allowing them to participate in a rating system that shapes providers’ reputations. Coupled with this additional freedom and control is an inherent level of risk. While digital platforms have taken some measures to alleviate that risk, the most effective risk-mitigation factor would be the requirement of a verifiable online identity for participants in the sharing economy. However, before verifiable online identity can truly take hold in the sharing economy landscape, several barriers must be addressed.

Online identity verification creates friction, and the sharing economy’s appeal lies in its ability to camouflage itself into daily life. To be effective, a good online identity system needs to be invisible. Lengthy or intrusive onboarding will stop a would-be participant’s experience in its tracks.[1] Seamless onboarding, on the other hand, drives home real, quantifiable results: “companies that focus on providing a superior and low effort experience across their customer journeys… realized positive business results, including a 10-15 percent increase in revenue growth and a 20 percent increase in customer satisfaction.”[2] As verifiable online identity finds its place in the sharing economy, it must be driven by a seamless, consumer-friendly onboarding process; otherwise, it will simply lead to abandonment during onboarding, which is counterproductive for platforms, providers, and consumers. With that constraint in mind, seven key problems stand out:

  1. No single definition of online identity verification exists.

Most people’s comfort level extends to offering their identification at banks or airports as a security measure.[3] The same does not hold true for online environments. While traditional identity verification is a relatively standard process, online identity verification is definitely not. Users currently don’t know what information is necessary for verification purposes or how that information enhances their online security.

Currently, the onus falls on each sharing economy platform to decide what components to include in the identity verification process; they then are faced with determining which components need to be outsourced and which can be developed internally, weighing the risks to find the appropriate cost versus performance balance.

  2. Identity verification processes change by the moment.

In the dynamic sharing economy, fresh players and segments continue to appear almost daily. Identity credentials constantly change. The challenge becomes creating practices and guidelines that remain current and protect against increasingly sophisticated ways to forge identity. In this dynamic environment, the refrain becomes “is this the most up-to-date data?” Trying to stay on top of these changing processes can drain resources—employee and financial.

  3. Online identity requires a tremendous amount of personal data.

Personal data often includes a person’s full name, Social Security Number or credit information, as well as a secondary source like date and place of birth or mother’s maiden name. Currently, to many consumers’ dismay, online identity verification measures require a good deal of personal data. Companies collecting data face a host of unnecessary risks and potential liabilities. Although security measures are evolving, securing personal data against hacking—which could cause serious damage to the consumer and to the company’s reputation—remains a significant concern.

  4. Identity verification actually reveals very little about a person’s qualifications.

Information currently collected to establish identity doesn’t tell the consumer anything about the provider’s qualifications—like professional licensure or certificates. Currently, review-based scores serve to establish peer-to-peer trust. But those scores can be easy to manipulate. For the identity verification process to be truly useful, it needs to be differentiated based on the sensitivity of the service being offered. For instance, accepting a ride from a Lyft driver requires one level of identity verification—is the driver who she says she is and does she have a clean driving record? Scheduling an appointment with an online health professional requires another level of identity verification entirely. Now the user needs to be able to easily ascertain not only the provider’s identity but also that they are a licensed, credentialed medical professional.

  5. Online identity verification is complex and requires constant re-verification.

Companies working to grow a vibrant user base must constantly show proof of identity verification for an ever-shifting user and consumer base. Unfortunately, verifying identity isn’t a one-time process. Identity verification must be established and re-established on a recurring basis. These efforts often require large amounts of time and financial resources and leave companies vulnerable to events that could seriously damage their brand reputation.

  6. With the sharing economy positioned to double in size every year, online security and privacy regulations are becoming more complex and thorough, particularly in industries that require large amounts of personal data.[4]

Fundamentals like full name, date of birth, and home address must already be verified. Beyond the fundamentals exists a wide variance in the number and type of data sources engaged to confirm someone’s identity.[5] The rapid growth of digital marketplaces means that companies must be up to date with a wide range of security and privacy regulations, including legal compliance with HIPAA, FCRA, CIP, and the Patriot Act. The constant push to obtain personal data for verification unnerves even the most web-savvy customers and providers. Nagging questions about who is storing the data, how it will be used, and—ultimately—by whom make users skittish. Additionally, the constant ask for personal data produces friction in what should be a seamless process. Friction pulls users out of the instantaneous experience that they seek on a digital platform. And friction causes drop-off. Meanwhile, companies managing sharing economy platforms must wrangle with crucial decisions about collecting, maintaining, and verifying online identity for their users. These decisions could well impact the viability of their company—and the security of thousands of their users.

  7. Current personal data practices create an environment that breeds internal threats.

Companies can, and do, invest billions of dollars in keeping personal data safe. However, even the best technology cannot eradicate the threat that stems from employees having access to users’ sensitive personal data. Employees assigned to process background information for users and providers have open access to personal data. The best encryption system doesn’t protect against the employees who have access to personal data as part of the normal course of business. Currently, organizations simply must trust that these employees won’t be swayed by bad actors who offer monetary reward for the distribution of personal data. Even intelligence agencies haven’t been able to solve this issue—as evidenced by multiple major releases of information by insiders.

Although challenging, these seven key problems are certainly not insurmountable. What is required is a fresh perspective on managing personal identity, so that the sharing economy can realize its full potential.

[1] Key Trend in Online Identity Verification, Econsultancy, May 2016

[2] Want A Powerful Customer Experience? Make It Easy For The Customer, B. Morgan, January 2015

[3] The Challenges of Non-Face-to-Face Identity Verification, Trulioo, June 2016

[4] Sharing is the New Buying, Crowd Companies, 2014

[5] The Challenges of Non-Face-to-Face Identity Verification, Trulioo, June 2016

Bio: David Thomas is the CEO at Evident. He is an accomplished cybersecurity entrepreneur, having held key leadership roles at market pioneers Motorola, AirDefense, VeriSign, and SecureIT. He has a history of introducing innovative technologies, establishing them in the market, and driving growth – with each early-stage company emerging as the market leader.