
Unmanned Edge Operations Are the Future

By Michael C. Skurla, Chief Technology Officer, BitBox USA

The growth of edge computing is an interesting phenomenon. The rise of public cloud and centralized computing paved the way to hybrid cloud and decentralized computing, and edge data center deployments closed the resulting IT infrastructure gap. Within this distributed infrastructure, the IT ecosystem demands a mix of telecom and web services.

Whether on-premises or closer to end users, edge computing complements current public cloud and colocation deployments.

The increased demand for connectivity, and the data proliferation it drives, gives IoT a critical role as an edge enabler. But adding more “client” devices to networks isn’t IoT’s only role within an edge ecosystem. The often-overlooked side is the IoT technology required to enable edge operations.

While cloud computing shifted the data center to a third-party network operations center (NOC), it didn’t eliminate on-premises data center operators who manage and respond to facility problems. Edge introduced a new challenge to network operations: managing equipment autonomously, with limited access to people on site who can address problems or perform maintenance. The new norm no longer keeps in-house IT staff, equipment and machines under one or several roofs; it distributes data center operations across thousands of smaller facilities, most of which are not within a short drive or walk.

Describing the edge as “the infrastructure topology that supports the IoT applications,” Jeffrey Fidacaro, Senior Analyst for Data Centers at 451 Research, underscores the importance of building a “unified edge/IoT strategy” that taps into multiple infrastructure options to manage the onslaught of IoT and facility systems while dealing with the need for constant change.

Interestingly, the platforms around IoT solutions, not the hardware itself, are the answer to this quandary. Based on IT standards, IoT sensing and monitoring hardware offers granular, a la carte-style monitoring solutions. These solutions are often easy-to-install, flexible form-factor hardware packages that equip small sites, from shelters down to small electrical enclosures. Since these devices offer a multitude of functions and data points, they make reliable and remote facility management possible.

For instance, the sensing technology of ServersCheck allows granular site data to be generated from hardware, which complements an IoT platform that allows large numbers of sites to be monitored in concert while also tying in more complex control sub-systems such as HVAC, generators, access control, and surveillance equipment. These IoT platforms expand monitoring and remote management to a global scale, allowing customized alarming, reporting, dashboarding, and more, for a geographically distributed portfolio of locations.

This style of IoT management solution allows a flexible, customized design for each site. Its scalable infrastructure reduces the need for NOCs to monitor multiple separate software packages to determine conditions at each site, facilitating rapid remote diagnostics and triage of problems before dispatching staff to remedy issues.

Edging to Cellular Levels

Telecommunications keeps pushing further to the edge. Remote monitoring is more crucial than ever with the planned 5G rollout, which ensures rapid growth of small-cell technology piggybacking on shared infrastructure such as streetlights, utility poles, and existing buildings.

Small cells are wireless transmitters and receivers designed to provide network coverage to smaller sites and areas. Compared to the tall cell towers that deliver strong network signals across vast distances, small cells are ideal for improving the cellular connectivity of end users in densely developed areas. They play a crucial role in addressing increased data demands in concentrated locations.

The rapid scalability of small-cell technology not only meets the demands of 4G networks but also adapts easily to 5G rollouts, bringing connectivity functions closer to end users. In clustered areas, small-cell technology allows for far superior connectivity, penetrating dense areas and in-building sites.

Consider small-cell technology the backbone of the fourth industrial revolution. By transmitting signals that carry greater amounts of data at higher speeds, small-cell technology empowers IoT devices to receive and transmit far more data. It also enables 5G, given that technology’s density requirements.

Enterprises face a flood of data from IoT connectivity. In fact, Cisco estimates this data flood to reach 850 zettabytes by 2021. This is driving edge buildouts of all sizes and shapes. To accomplish this, edge operators must rethink how they manage and monitor this explosion of sites. IoT platforms have proven to have the scalability and flexibility to take on this challenge in a highly affordable way.

As Forrester research predicted, “the variety of IoT software platforms has continued to grow and evolve to complement the cloud giants’ foundation IoT capabilities rather than compete with them” and it expects the IoT market to continue to see dramatic and rapid change in coming years.

It’s time for the technology that edge is being built to support – IoT – to play a role in managing the critical infrastructure that enables it. IoT platforms can tie the knot for this marriage.

Bio

Michael C. Skurla is the Chief Technology Officer for BitBox USA, providers of the BitBox IoT platform for multi-site, distributed facilities’ operational intelligence, based in Nashville, Tennessee. Mike’s in-depth industry knowledge in control automation and IoT product design sets cutting-edge product strategy for the company’s award-winning IoT platform.

2020 Cloud Market Predictions: The Future Looks Bright

By Mark Kirstein, Vice President, Products at BitTitan

For many people, the New Year is a time for reflection on the year gone by and an opportunity for renewed commitment to progress and goals. The same is true for businesses. As we embark on a new year and a new decade, many businesses are trying to anticipate where the market is headed so they can make strategic plans that will result in success.

Many things could influence market conditions around the world this year, from the 2020 Olympics in Tokyo, to the U.S.-China trade war, to the U.S. presidential election. While the U.S. surplus in exported services is shrinking overall, this trend is not expected to have a negative impact on the cloud services sector. Read on for our top six predictions for the cloud market in 2020.

  1. SaaS growth will continue

Currently, the cloud is a $200 billion market, yet overall IT spending is in the trillions of dollars. This means that spending for on-premises (on-prem) software and services remains strong. Is this a bad sign for the cloud market? Absolutely not. We anticipate the global cloud services market for 2020 to continue to grow in excess of 20 percent. Many organizations are moving to the cloud in stages and there are several factors that will keep migration in forward motion. These include increased confidence in and reliance on cloud services, the phase-out of on-prem software like Microsoft Exchange 2010, and continued aging of hardware and infrastructure. While we expect most companies to make conservative spending decisions in 2020, decisions related to the cloud are fundamental to operations, particularly for global companies, and not as likely to be put on the back burner. We will see continued innovation of SaaS services and offerings, coupled with organizations migrating closer to an “all-in” adoption of the cloud. There is a lot of opportunity ahead for SaaS.

  2. Cloud-to-cloud migrations will continue to rise

While companies are continuing to migrate from on-prem to the cloud, we expect to see a continued uptick in cloud-to-cloud migrations as more companies devote attention to optimizing their cloud footprint. Currently, a majority of BitTitan’s business is cloud-to-cloud migrations. The historical concerns of cloud security, reliability, quality, and SaaS-feature parity have largely been addressed, but companies are continually searching for the provider that can deliver the most value for their IT dollars. Businesses want the ability to move their data while avoiding the perils of vendor lock-in. Furthermore, maintaining a multi-cloud environment allows companies to better manage business risks.

  3. The use of containers will increase

Containerization, which packages up software code and all its dependencies so the application runs quickly and reliably and can be moved from one computing environment to another, has achieved mainstream adoption and will continue to be a strong market segment in 2020. Containers offer a great deal of flexibility and reduce the risks for companies moving to the cloud. They reduce infrastructure costs, accelerate and simplify the development process, result in higher quality and reliability, and reduce complexity for deployments. Containers also aid in cloud-to-cloud migrations. Businesses that use containers can easily run them on Google Cloud today and switch to other platforms like Azure or Amazon Web Services (AWS) tomorrow without complex reconfiguration and testing. This allows businesses the freedom to shop for the right cloud environment. This is one of the reasons the container market is growing at a rate of more than 40 percent, and we expect that growth will continue.

  4. Microsoft and Google will seize market share from AWS

Of the top three public cloud providers today, AWS was first to market and has enjoyed a considerable lead in market share. AWS has been particularly appealing for companies that want to provide “born in the cloud” services. But in 2020, we expect the two other top public cloud vendors – Microsoft Azure and Google Cloud – to make significant inroads and take market share away from AWS. Part of this is simple math: With such a big slice of the market, it will be hard for AWS to maintain its rate of growth. And the competition is getting stiffer. Microsoft is doing a great job of appealing to enterprises who are grappling with legacy infrastructure. Google also is making significant investments in its cloud computing unit. Its technology is already very good and easy to use, which will make Google a force to be reckoned with. Another trend we are likely to see is that smaller public cloud vendors will drop out or choose to focus their business on the private cloud infrastructure market, where they are more likely to excel.

  5. The market will expand and consolidate

As the cloud market grows, the ecosystem will expand with new types of solutions and capabilities to manage and streamline cloud deployments, increasing the value of investments in the cloud. On average, companies using cloud technologies are using five different cloud platforms. We will continue to see new and improved offerings to help companies assess, monitor, and manage their cloud footprints to reduce costs and improve security. As new, compelling cloud solutions enter the market, we are likely to see more consolidation, with Amazon, Microsoft and Google continuing to acquire new solutions to enhance their own offerings.

  6. 5G will usher in the next level of cloud adoption globally

Recently, Ericsson Mobility predicted that there will be 1 billion 5G subscriptions by 2023 and they’ll account for almost 20 percent of the entire global mobile data traffic.[1] Besides the massive increase in speed provided by 5G technology, it also comes with a remarkable decrease in latency. While 3G networks had a latency of nearly 100 milliseconds, that of 4G is about 30 milliseconds, and the latency for 5G will be as low as 1 millisecond, which most people will perceive to be nearly instant. With this type of performance, we believe that cloud-based services will become more reliable and efficient. Not only that, but 5G may also accelerate cloud adoption in countries that are lacking wired infrastructure today.

Without a crystal ball, there is no way to know for sure what the market landscape will look like in the coming months. But by analyzing recent trends and considering their implications for the future, companies can take a forward-looking approach that will position them to stay ahead of the curve and be ready to seize opportunity as it arises. This year is looking bright for the cloud.

Bio

Mark Kirstein is the vice president of products at BitTitan, leading product development and product management teams for the company’s SaaS solutions. Prior to BitTitan, Mark served as the senior director of product management for the mobile enterprise software division of Motorola Solutions, continuing in that capacity following its acquisition by Zebra Technologies in 2014. Mark has over two decades of experience overseeing product strategy, development, and go-to-market initiatives.

When not on the road coaching his daughter’s softball team, Mark enjoys spending time outdoors and rooting for the Boston Red Sox. He holds a bachelor’s degree in computer science from California Polytechnic State University.

[1] “How 5G Will Accelerate Cloud Business Investment,” CompareTheCloud.net. Retrieved December 17, 2019.

5 Must-Have Security Tips for Remote Desktop Users

By Jake Fellows, Associate Product Manager, Liquid Web

Windows servers are typically managed via the Remote Desktop Protocol (RDP). RDP is convenient and simple to set up, and Windows has native support on both the client and the server. RDP clients are also available for other operating systems, including Linux, macOS, and mobile operating systems, allowing administrators to connect from any machine.

But RDP has a significant drawback: It’s a prime target for hackers.

In late 2018, the FBI issued a warning that RDP was a vector for a large number of attacks, resulting in ransomware infections and data thefts from many businesses, including healthcare businesses.

In 2019, researchers discovered several critical vulnerabilities in RDP that impacted older versions of Windows. The BlueKeep vulnerability is a remote code execution flaw that allows an unauthenticated attacker to connect to the RDP server and execute arbitrary code via malicious requests. Additional security vulnerabilities were discovered later in the year.

Businesses use RDP because it is the most convenient way to provide remote desktop services for Windows servers and desktops, but it’s a relatively old protocol that was not initially designed with modern security best practices in mind.

However, RDP can be made more secure with a few configuration changes and best practices.

Avoid Guessable Passwords

Windows servers are often compromised with dictionary attacks against RDP. Attackers know hundreds of thousands of the most commonly used passwords, and it’s trivial to script a bot that can make repeated login attempts until it discovers the correct credentials.

It isn’t just the usual suspects such as “123456” or “pa55word” that should be avoided. Any simple password you can think of is likely already in a dictionary culled from leaked password databases. It is also important to ensure that you don’t reuse passwords that you use elsewhere on the web.

If you are the only administrator who manages the server, be sure to generate a long and random password that attackers can’t guess. If other people access the server over RDP, consider using the built-in Password Policy system to implement policies that enforce minimum complexity and length requirements.
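
As a minimal illustration (not from the original article, and assuming Python is available on the administrator’s workstation), a long random password can be generated with the standard-library secrets module instead of being chosen by hand:

  import secrets
  import string

  def generate_password(length: int = 24) -> str:
      """Generate a long, random password for an RDP administrator account."""
      alphabet = string.ascii_letters + string.digits + string.punctuation
      return "".join(secrets.choice(alphabet) for _ in range(length))

  if __name__ == "__main__":
      print(generate_password())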

Update RDP Clients and Servers

Attacks against RDP frequently exploit vulnerabilities in outdated server and client software. Older versions of RDP also lacked the cryptographic protections of more modern versions. As we have already mentioned, it is not uncommon for serious vulnerabilities to be discovered in older versions of RDP.

When the BlueKeep vulnerability was discovered, Microsoft quickly released a patch that, if installed, would close the security hole. But clients and servers only benefit from that protection if they are regularly updated.

Windows Server can automatically update RDP via Microsoft Update, but server administrators should verify that they are running the most recent version. Automatic updates can be turned off, and many server administrators don’t like to risk the disruption that an automatic update might cause. It’s always worth checking to make sure your servers are running patched and secure versions of RDP.
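
As a hedged sketch of such a check (assuming Python and PowerShell are available on the server; the list of required KB numbers is a placeholder to be filled in from Microsoft’s advisories for your Windows version), installed hotfixes can be compared against the patches you expect to see:

  import subprocess

  def installed_hotfixes() -> set:
      """Return the hotfix IDs that Windows reports via PowerShell's Get-HotFix."""
      result = subprocess.run(
          ["powershell", "-NoProfile", "-Command",
           "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
          capture_output=True, text=True, check=True,
      )
      return {line.strip() for line in result.stdout.splitlines() if line.strip()}

  # Placeholder: list the KB numbers Microsoft documents for your Windows version
  # (for example, the update that fixes CVE-2019-0708, "BlueKeep").
  REQUIRED_KBS = {"KB0000000"}

  if __name__ == "__main__":
      missing = REQUIRED_KBS - installed_hotfixes()
      print("Missing patches:", ", ".join(sorted(missing)) or "none")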

Don’t forget to update third-party RDP clients, too.

Connect Over RD Gateway or a VPN

Using RDP over the internet without an SSL tunnel is dangerous. RDP encrypts traffic flowing between the client and the server, but it may be vulnerable to certain types of attack; plus, the RDP port is exposed to brute-force attacks and denial of service attacks. Because of the potential security risk of exposing an RDP server to the open internet, it’s a good idea to put it behind a gateway that provides better security.

An RD Gateway allows clients to create an SSL-protected tunnel before connecting to the RDP server. The RDP server only accepts connections from the gateway. It is not exposed to the open internet, limiting the attack surface and preventing attackers from directly targeting the server.

Connecting over a VPN is a reasonable alternative to an RD Gateway, but it is less secure and may introduce unacceptable latencies.

Restrict Connections Using the Windows Firewall

If you know which IPs will connect to your RDP server, you can use the firewall to restrict access so that IPs outside of that scope will be rejected. This can be achieved by adding IP addresses to the RDP section of the firewall’s inbound rules.
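
As an illustrative sketch only (run as Administrator; the rule name differs between Windows versions and the IP addresses are placeholders), the scope of the inbound RDP rule can be narrowed from Python by shelling out to netsh:

  import subprocess

  # Placeholder addresses: the only clients that should be able to reach RDP.
  ALLOWED_IPS = ["203.0.113.10", "198.51.100.25"]

  # Adjust to the inbound RDP rule name shown by Windows Firewall on your server,
  # e.g. "Remote Desktop (TCP-In)" or "Remote Desktop - User Mode (TCP-In)".
  RULE_NAME = "Remote Desktop (TCP-In)"

  def restrict_rdp_scope(ips):
      """Limit the inbound RDP firewall rule so that only the given IPs are accepted."""
      command = (
          'netsh advfirewall firewall set rule '
          f'name="{RULE_NAME}" new remoteip={",".join(ips)}'
      )
      subprocess.run(command, check=True)

  if __name__ == "__main__":
      restrict_rdp_scope(ALLOWED_IPS)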

Change the Default RDP Port

The default RDP port is port 3389, and that’s where most brute-force attacks are directed. Changing the port is a straightforward way to reduce the number of bot attacks against your server.

To change the RDP port, adjust the following registry key to the new port number:

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp\PortNumber
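
A minimal Python sketch of that change, using the standard-library winreg module (run as Administrator on the server; the port number below is only an example, and Remote Desktop Services must be restarted for it to take effect):

  import winreg  # Windows-only standard-library module

  NEW_PORT = 3390  # example only: pick an unused port and allow it in the firewall
  KEY_PATH = r"System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp"

  # Open the RDP-Tcp key with write access and replace the PortNumber value.
  with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
      winreg.SetValueEx(key, "PortNumber", 0, winreg.REG_DWORD, NEW_PORT)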

Changing the port is not a substitute for implementing the other security tips we’ve mentioned in this article. While it may be enough to confuse unsophisticated bots and inexperienced hackers, more knowledgeable and sophisticated attackers will have little trouble finding the new port, so changing the port is not sufficient to adequately protect RDP from attack.

It is possible to implement even stricter security strategies to protect your RDP server from attacks, including the addition of two-factor authentication. However, following the tips we have outlined here will be enough to keep your server safe from the vast majority of attacks.

Bio

Jake Fellows is an Associate Product Manager for Liquid Web’s Managed Hosting products and services. He has over ten years of experience involving several fields of the technology industry, including hosting, healthcare, and IT system architecture.

Why Exchange 2010 Users Can’t Afford to Delay Their Software Upgrades – and How MSPs Can Help

By David Mills, Director of Product Management, BitTitan

It’s been a decade since the 2009 release of Exchange Server 2010, which means the lifecycle for this Microsoft product is soon coming to an end. Originally scheduled for January 14, 2020, the end-of-support date was recently extended by Microsoft to October 13, 2020. This may be welcome news for businesses still relying on Exchange 2010, but it should also serve as a wake-up call: The time to upgrade is now.

In announcing the end-of-support deadline extension, Microsoft stated it was doing so “to give Exchange 2010 customers more time to complete their migrations.” These migrations require a considerable amount of time and planning to deploy and complete successfully, and keeping the project on schedule is a task in itself. Businesses should not delay their upgrades: the ramifications are serious, and Microsoft will not extend the deadline again.

This is where managed service providers and IT professionals must step in and advise their clients of the upgrades they need to make. Doing so is a win-win for IT pros and their clients: it builds trust, keeps the health of a customer’s business strong and enables continued business growth for all parties.

The Potential Risks

So, what are the risks businesses face if they don’t upgrade their software? There are quite a few. During a product’s lifecycle, Microsoft provides a substantial number of new features, bug fixes and security updates. Once the end-of-support deadline passes, Exchange 2010 users will not receive technical support from Microsoft for issues that occur. They will not receive bug fixes for problems that affect the usability of their server. They won’t receive security patches for vulnerabilities that are found. These businesses will face an increased risk of data breaches and malicious cyberattacks. In addition, depending on the compliance regulations of their industry, they may face legal liability for falling out of compliance.

It’s a harrowing outlook, but the good news is there are practical courses of action businesses can take to remedy their situation.

The Most Viable Solutions

There are two main options for organizations looking to upgrade. For those considering a full transition to cloud technologies, a fitting course of action may be an upgrade to Exchange Online/Office 365. This approach is typically the most reliable and ensures that users will receive regular software updates from Microsoft. End users will have the latest feature enhancements provided in the cloud Office suite. From Microsoft’s perspective, this is likely the preferred route, though subscribers must stay vigilant about price increases.

However, not all businesses are ready to abandon on-premises systems just yet. For those that require on-prem hardware, upgrading to Exchange Server 2016 or 2019 may be the way to go. This option offers businesses more control over their email data, as well as a breadth of backup and recovery options for their workplace systems. It must be noted that when pursuing this option, businesses migrating from Exchange 2010 must conduct a “double-hop” migration when moving data to Exchange 2019, and first migrate to Exchange 2013 or 2016. This can seem like a tedious step to add to an already complex process. Employing a third-party migration tool – such as BitTitan’s MigrationWiz – can eliminate this step and afford the ability to migrate directly to Exchange 2019.

Taking a Broader Approach

There is another wrinkle as to why now is an important time to facilitate migrations for customers: Exchange 2010 isn’t the only product that Microsoft will no longer support in 2020. An end-of-life deadline is set for Windows 7 on January 14, 2020. Nine months later, Microsoft will discontinue support for both SharePoint Server 2010 and Office 2010 on October 13, 2020. That’s a considerable number of products reaching their lifecycle end in a short amount of time – and it creates an opportune timeframe for MSPs to potentially bundle migration projects for customers.

MSPs and IT pros can delve into larger workplace upgrades and digital enhancements for clients. They can potentially explore overseeing multiple upgrades for these products at once and ensure that a stable and secure workplace plan is established for the long term.

For IT pros and their clients, staying on top of the end-of-support date goes beyond simply upgrading software. By not making the necessary upgrades, the health and well-being of a customer’s business is at stake. Making sure clients are running software and relying on workplace systems that are appropriately upgraded, secure and compliant eliminates these threats and vulnerabilities. It ensures that business for both IT pros and their clients continues to successfully hum along.

Bio

David Mills is Director of Product Management at BitTitan, driving product strategy, defining product roadmaps and ensuring customer success. David is an experienced product management leader with more than two decades of industry experience. Prior to BitTitan, he worked as a principal consultant at PricewaterhouseCoopers, a product manager at Microsoft and director of product management at Avanade. His areas of expertise include product planning, cloud infrastructure and applications, and marketing communication.

Small and Medium Businesses Are More Vulnerable to Cyberattacks

Tips for Small Businesses on How to Enhance Cybersecurity

By Daniel Markuson, Digital Privacy Expert, NordVPN

According to a study conducted by the Ponemon Institute, only 28% of small and medium businesses mitigate cyber threats, vulnerabilities and attacks effectively. The study revealed that nearly half of these companies have no understanding of how to protect their data, finances, employees and customers against cyberattacks.

However, small businesses may often be even more attractive targets for hackers than larger enterprises. Here are some of the reasons:

  1. They own valuable data. Contrary to what many small companies may think, they do hold data that is useful to hackers. It can be anything from financial information that can be used for fraud to personal details valuable for identity theft.
  2. They are a path to other companies. Hackers often target small companies to gain easy access to larger enterprises. A small business can also be a path into the data of many other small businesses.
  3. They are easy to hack. Small businesses often lack adequate cyber defenses, so they are frequently much easier to compromise than larger enterprises. There are usually no security personnel or technologies in place, so it is also more challenging to detect an attack when it occurs. Effective handling of cyber threats is impossible without a strategy and strict policies applied to all employees.
  4. Recovery is more difficult. Every small business has computer-based data it needs to operate, yet few can recover from an attack on their own. A cyberattack might be the end of the road, especially for a small business, which is why small business owners are more likely to pay ransoms.

Ransomware and spear-phishing attacks are the most common cybercrime tactics used against small businesses. The first blocks access to a computer or mobile phone until the attackers receive a ransom payment. The second is an email-spoofing attack seeking unauthorized access to valuable information. There are hundreds of different ways to harm an enterprise, its employees or its customers, and some of the most common methods don’t even require advanced technical knowledge. Social engineering schemes, for example, are easy to launch and effective.

Simple tips for small business owners to boost cybersecurity

Do regular backups. Regularly backing up your data to a secure location, offsite and offline, is essential and helps protect you from a ransomware attack. For small businesses with less sensitive data, even external hard drives might be enough. For greater peace of mind, consider paid backup services (don’t trust free ones).
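
As a minimal sketch of an automated copy to an external drive (the paths are placeholders; a real backup routine would also verify, encrypt and rotate copies):

  import shutil
  from datetime import datetime
  from pathlib import Path

  # Placeholder paths: point DEST_ROOT at an offsite or offline location
  # (for example, an external drive that is disconnected after the copy).
  SOURCE = Path(r"C:\BusinessData")
  DEST_ROOT = Path(r"E:\Backups")

  def backup() -> Path:
      """Copy the data directory to a timestamped folder on the backup drive."""
      stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
      target = DEST_ROOT / f"BusinessData-{stamp}"
      shutil.copytree(SOURCE, target)
      return target

  if __name__ == "__main__":
      print("Backup written to", backup())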

Secure all your smart devices. Cybersecurity is not limited to your smartphone, tablet or computer. These days, even printers and TVs are connected to the internet, so make sure these are secure as well. If a device’s default password, or even its username, is insecure, change it. Additionally, restrict admin privileges to your networks and accounts: each team member should have personal credentials with an assigned role. This way you will always know who made a mistake.

Secure all your data. Encrypting your data makes it more difficult to exploit or hijack. A reliable and reputable VPN service provider, like NordVPN, Surfshark or ProtonVPN, can encrypt the online traffic of all your employees, ensuring that your data is safe when they need to access it. Many small business owners and employees work at office hubs or at home, so their data gets sent through unsecured channels; a reliable VPN fixes this problem. Of course, don’t forget that you also need an antivirus and a strong firewall.

Educate your team members. It is essential to cultivate the secure mindset of every employee. Keep your team members informed about the dangers of downloading attachments or clicking on links from unknown sources. Make sure to educate them about social engineering tactics, latest hacks and phishing attacks. You can use an online cybersecurity test to understand how much your employees know about digital security.

Always update your devices. Don’t forget to update your computers, tablets, smartphones and other devices regularly. Do the same for software. New updates fix security vulnerabilities and system bugs that could cause insecure situations. Make sure to update your firewalls and antivirus.

Create a strong password. Use unique passwords for different accounts or devices. Make sure to create strong passwords and change them every three months. It’s also crucial for your company to have a strict password policy and ensure that all employees comply with it. Additionally, share some tips with your colleagues on how to create strong and reliable passwords.

An average data breach costs $3.92 million, and that’s a heavy burden on small and medium enterprises. Leaks drive away clients, plus companies end up paying millions in fines and compensations. Even though cyberattacks often target SMEs, the media focuses only on the big hacking scandals. That’s why small company owners tend to think only of major corporations with vast amounts of valuable data as the primary targets. Consequently, SMEs often do not take the most basic steps to protect their digital resources. It’s time to understand that your business’s security is in your own hands.

Bio
Daniel Markuson is a Digital Privacy Expert and Internet security enthusiast at NordVPN. Daniel is generous with spreading news, stories and tips on how to stay secure in the fast-changing digital world.

Overcome Teams Migration Challenges with Agility and the Right Tools

By David Mills, Director of Product Management, BitTitan

In the unified communications market, Microsoft Teams has proven to be a dominant player, with adoption rates surging. In July, Microsoft reported that Teams had reached 13 million daily active users, outpacing rival platforms. By comparison, its primary competitor, Slack, recently reported 12 million daily active users.

Further fueling Teams’ success is the year-over-year increase of activity in the mergers and acquisitions (M&A) market. In a report on 2019 M&A trends from Deloitte, industry experts from corporate and private-equity organizations overwhelmingly predict a sustained increase in M&A deals over the next 12 months. Considering Microsoft’s strong market presence with Office 365 products and services – particularly among larger organizations – this M&A activity is likely to increase adoption of Teams when smaller companies migrate from other platforms. And this increased activity of merging and separating businesses is driving the need for Teams migration projects.

However, a multitude of hurdles exist, as vendors and businesses are searching for an ideal solution to handle Teams migrations. So, what specifically are the difficulties standing in the way?

Three Challenges

The first challenge facing MSPs and IT professionals is that Microsoft has yet to release a full-fidelity Teams migration API, so IT pros must rely on what’s available via Microsoft’s Graph API and SharePoint API. This is not ideal because these interfaces need further refinement to enable seamless and efficient Teams migrations.

The second challenge is the complexity of Teams: the platform is composed of many individual components, such as teams, channels, conversations and user permissions. All these parts need to be migrated in the proper sequence, along with the underlying SharePoint site and folder structure.

Finally, when conducting a Teams migration in a merger scenario, it is not uncommon to encounter Teams or channels that have the same names or username conflicts. These issues can present migration problems that can lead to extended downtime for your users or customer. It is important that MSPs and IT professionals be aware of these challenges before beginning a Teams migration. A little planning will help avoid obstacles and ensure a successful migration.

Solutions on the Market

As MSPs and IT professionals search for the ideal Teams migration tool, there are a few important requirements to consider. First, look for a tool that has the scalability to move an abundance of files and handle large workloads. Given the complex nature of Teams, migration tools must also provide flexibility. Many companies are increasingly wanting to conduct partial migrations and restrict the movement of specific files during a migration, deviating from the normal “lift-and-shift” approach.

Reliable solutions for Teams migrations are becoming available on the market. Earlier this year, BitTitan added Teams migration capabilities to MigrationWiz, its 100-percent SaaS solution for mailbox, document and public-folder migrations. These capabilities enable MSPs and IT professionals to migrate Teams instances and their individual components, including Teams, channels, conversations and permissions. MSPs and IT pros can leverage MigrationWiz to conduct a pre-migration assessment to better gauge the timeline of a Teams migration, the number of required licenses and an overall estimate of the project scope and cost.

BitTitan continues to release Teams migration enhancements that allow MSPs and IT pros more flexibility when conducting Teams migrations. These updates offer MSPs and IT pros some compelling capabilities, including the ability to:

  • Rename Teams in bulk from the Source to the Destination to avoid file-name duplication and username conflict.
  • Exclude guest accounts from the overall assessment count.
  • Move conversation history to the Destination while maintaining similar formatting from the Source.
  • Support Teams instances of U.S. government tenants. This is a crucial sector of the market that requires careful and calculated action when conducting migrations to ensure compliance and security regulations are met.

The new Teams migration features are the result of soliciting partner feedback on how to best meet their needs, with more updates to come soon.

“BitTitan really stepped up for this project,” said Chuck McBride, founder of Forsyte IT Solutions. “We looked at several other solutions, but when we scoped the size of the project and workloads, BitTitan brought us the best option for everything we wanted to do.”

Adopting an Agile Approach to Teams

With the absence of a full-fidelity API from Microsoft, MSPs and IT professionals continue to refine the process for migrating Teams to deliver the most seamless migration possible. As updates and enhancements continue to roll out, MSPs and IT pros must adopt an agile approach to continually meet the evolving needs of users and customers, and ensure they’re leveraging the most current APIs for migrations.

By assessing the landscape up front, leveraging available tools and maintaining an agile approach, MSPs and IT pros will position themselves to successfully meet the growing demand around Teams migrations – and they’ll be well-positioned to address the challenges that arise.

Bio
David Mills is Director of Product Management at BitTitan, driving product strategy, defining product roadmaps and ensuring customer success. David is an experienced product management leader with more than two decades of industry experience. Prior to BitTitan, he worked as a principal consultant at PricewaterhouseCoopers, a product manager at Microsoft and director of product management at Avanade. His areas of expertise include product planning, cloud infrastructure and applications, and marketing communication.

Innovation Comes from Listening to Customer Needs

By Sandi Renden, Director of Marketing, Server Technology

Product improvements are not solely the result of product management influences.

In many cases, the most innovative products are the result of customer feedback, and they are often the most successful. To remain relevant, products must be in lockstep with customers’ changing needs to enhance their experiences. And data centers are a perfect example of an industry that must constantly adapt to change.

Why is the data center industry a good example? Because workloads need elastic processing abilities, servers have gone virtual and networks are sprawling at the edges. As this continues, the power required to run these environments must be as flexible as its hardware and software counterparts. Intelligent rack power distribution unit (PDU) manufacturer Server Technology knows all too well how quickly changing data center power requirements can erode a product’s usefulness when it comes to supporting changing rack devices. However, the company also has a history of circumventing this situation and excelling where other PDU and rack-mount power strip manufacturers often struggle and sometimes fail.

Marc Cram, Director of New Market Development for Server Technology, shares some insights into how his company is able to quickly pivot product manufacturing and redesign data center PDUs to fit today’s elastic workload environments. Spoiler alert: their success comes from listening to their customers and allowing them to design their own PDUs.

Turning Pain into Gain

Where do good inventions truly come from? Willy Wonka states the secret to inventing is “93% perspiration, 6% electricity, 4% evaporation and 2% butterscotch ripple.” Although this may be practical for creating Everlasting Gobstoppers, in the data center environment game-changing inventions are predicated on simpler methods. And perhaps the simplest, yet most successful, stimulus for inventors comes from listening to customers’ pain points.

Cram says that Server Technology was founded on listening to customers and figuring out how to satisfy as many of their power needs as possible with a single PDU design. “It’s a tradition that continues to this very day; we still do leading-edge work for our customers by listening to their specific needs and turning that information into targeted products for their exact applications,” he says.

Cram understood that data centers were traditionally built in raised-floor environments with IT managers working in the same facility, which made it easier for those managers to frequently replace rack-mount power strips as servers were swapped out. As time went by and the data center industry evolved, servers and racks were no longer necessarily located on the same premises as the IT managers. Listening to customers, it became clear that the ability to read rack power status remotely made a tremendous amount of sense and alleviated the pain of traveling between data centers to read or reset PDU devices.

However, all customer needs are not created equal and some organizations did not want remote management capabilities. Rather, they voiced the need for PDUs to be equipped with alarm capabilities instead. “Banks are a good example of this,” Cram said. “The last thing a bank wants is for somebody to come in and turn off their rack power supply that just happens to be processing someone’s ATM transaction. You don’t ever want it to be interrupted.”

The Difference Between Hearing and Listening

Hearing is the act of perceiving auditory sounds versus listening, which is the act of paying attention to sounds and giving them consideration. Listening to customers allowed Server Technology to jump directly to a Switched PDU from a basic, unmanaged PDU. Cram says that by listening to customers, the company discovered a need for a power strip that had remote monitoring capabilities, but also provided individual outlet controls. Cram noted, “this is where the ‘smarts’ in our products came from.”

A similar listening/consideration process was undertaken when Server Technology developed outlet power sensing. The company learned that customers liked the per-outlet sensing capabilities but did not want the control. With this information, it created smart PDU options. Now, Server Technology offers five different PDU levels: Basic (power in/power out), Metered, Switched, Smart Per Outlet Power Sensing (Smart POPS) and Switched POPS.

The flexibility of five distinct data center PDU offerings, as well as the High-Density Outlet Technology (HDOT) line of PDUs, decreases the need to go back and reconfigure rack power when new devices are added. Cram says that “whether the need is for a full rack-of-gear or a rack that starts its life with three servers and a switch then eventually is used for some other configuration, Server Technology’s family of PDUs can handle the entire transition.”

The innovation behind the HDOT and HDOT Cx lies in enabling customers to select the outlet types they want and where those outlets are placed on the PDU. “You can reconfigure the rack to plug a different device into the same Cx outlet,” Cram says. For example, a customer might populate a rack with 1U servers that use C13 outlets. The HDOT Cx would give them the ability to remove those servers and add a high-end Cisco router or another big-power device that requires C19 outlets. The HDOT Cx outlet provides the flexibility they need without throwing away the original PDU.

Perhaps the ultimate result of listening to customers’ concerns is the ability Server Technology has given its customers to actually “build your own PDU,” or BYOPDU. This power strip innovation provides a website where customers can configure the exact type of outlets needed, based upon the PDU’s intent and initial use. By specifying the Cx modules, the customer gains extreme flexibility and the opportunity to extend the life and usability of each power strip.

Listening, Not Hearing, Pays Dividends

Customer feedback is one of the greatest sources of product inspiration and listening is a skill that needs to be developed to ensure useful evolution. Incorporating feedback to advance products will benefit entire industries—and creating a perpetual feedback/innovation loop ensures a steady stream of improvements. Aside from the flexible HDOT PDU family, Server Technology also developed other PDUs that distribute 415VAC, 480VAC or 380VDC—all in response to customer feedback and customer needs. “In an industry where rigidity breeds stagnation and stagnation impedes a data center’s ability to efficiently process workloads, customers’ voices are the inventor’s greatest ally,” Cram concluded.

Bio
Sandi Terry Renden is Director of Marketing at Server Technology, a brand of Legrand in the Datacenter Power and Control Division. Sandi is a passionate leader and creative visionary with over 25 years of management, digital marketing and sales execution experience, and a proven track record of recruiting and retaining talent, hitting sales targets and developing multi-channel digital marketing and branding campaigns for nonprofit and for-profit organizations in both B2B and B2C. She has international working and cultural (residency) experience on three continents (the Americas, Asia and Europe). Sandi earned a BA in Marketing from the University of Utah and an MBA in Marketing.

Alert Logic Report Reveals Wealth of Vulnerabilities for SMBs

By Rohit Dhamankar, Vice President, Threat Intelligence, Alert Logic

When it comes to incorporating strong cybersecurity hygiene into their practices, small and midsize businesses (SMBs) sometimes don’t realize how susceptible they are to cyber attacks. They read the latest news about a big-name organization getting hacked and conclude that this would never happen to a “small fish” company like theirs.

But they are mistaken.

Due to increasingly automated attack methods, cyber adversaries aren’t distinguishing between “big” and “small” fish anymore. They’re targeting vulnerabilities, with automation that empowers them to cast a wide net to cripple SMBs and large enterprises alike. New research from Alert Logic indicates that lack of awareness may be leading to a wealth of exposures for SMBs: A clear majority of their devices are running Microsoft OS versions that will be out of support by January 2020, and most unpatched vulnerabilities in the SMB space are more than a year old.

What Alert Logic’s New Findings Really Say

These and other findings from the Alert Logic Critical Watch Report 2019 should serve as an eye-opener for SMBs. Our analysis was based on 1.3 petabytes of data from more than 4,000 customers, including data from 2.8 million intrusion detection system (IDS) events and 8.2 million verified cybersecurity events. Here are highlights from the report that illustrate the most significant challenges we found:

Digging into the Numbers

More than 66 percent of SMB devices run Microsoft OS versions that are expired or will expire by January 2020. There’s little representation, in fact, of the current Windows Server release – 2019 – among this group and the majority of devices run Windows versions that are more than ten years old. Even if not exposed to the internet, these versions make it easy for attackers to move laterally within systems once they compromise a host.

Three-quarters of the top 20 unpatched vulnerabilities in the SMB space are more than a year old. Even though automated updates have improved software patching, organizations struggle to keep up the pace. The use of open source software – a common technique for building software projects efficiently – complicates the patch cycle, especially when the open source software is embedded. To uncover and reduce the vulnerabilities left by unpatched code, organizations must invest in third-party validation of the efficacy of the update process in their software development life cycle (SDLC) while conducting regular vulnerability scans.

Security Challenges SMBs Face

Weak encryption continues to create headaches, accounting for 66 percent of workload configuration problems. Unfortunately, many SMBs simply implement a default encryption configuration for a particular app. Those defaults were typically defined when older encryption protocols were still considered safe, but they might no longer be. It’s not surprising, then, that our research found 13 encryption-related configuration flaws accounting for 42 percent of all security issues identified.

Nearly one-third of the top email servers run on Exchange 2000, which has been unsupported for nearly 10 years. Email is the life blood of most businesses, so SMBs place their operations, sales and other critical functions at risk if they encounter newly identified vulnerabilities for which there are no available patches.

The three most popular TCP ports – SSH (22/TCP), HTTPS (443/TCP) and HTTP (80/TCP) – account for 65 percent of all vulnerabilities. Internal security teams should regularly scan ports to determine weaknesses and firewall misconfiguration issues, as well as whether unusual, possibly harmful services are running on systems. In addition, they need to close ports that are no longer in use; install firewalls on every host; monitor and filter port traffic; and patch and harden any device, software or service connected to ports.
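
As a minimal sketch of such a check (the hosts and ports are placeholders, scans should only be run against systems you are authorized to test, and production teams would normally use a full vulnerability scanner), a basic TCP connect scan needs only the Python standard library:

  import socket

  # Placeholder hosts and ports; substitute the systems your team is authorized to scan.
  HOSTS = ["192.0.2.10", "192.0.2.11"]
  PORTS = [22, 80, 443, 3389]

  def open_ports(host, ports, timeout=1.0):
      """Return the subset of ports that accept a TCP connection on the host."""
      found = []
      for port in ports:
          with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
              sock.settimeout(timeout)
              if sock.connect_ex((host, port)) == 0:
                  found.append(port)
      return found

  if __name__ == "__main__":
      for host in HOSTS:
          print(host, "open ports:", open_ports(host, PORTS))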

Half of systems are running a version 2.6 Linux kernel, which has been out of support for more than three years. There are at least 69 known vulnerabilities for this kernel level, with many relatively easy to exploit. Kernels serve as the heart of an operating system, managing hardware, memory, apps, user privileges and an assortment of other key functions/components.

What to Think About Next

An obvious answer for SMBs is to inventory their cyber ecosystem and replace systems that have outlived support. But this is impractical for many. Resource constraints and inability to scale often prevent SMBs from upgrading and they struggle to apply best practices in patching, hardening and cyber hygiene. These organizations don’t have to go it alone, however, and can partner with security providers who offer strong but cost-conscious options to provide needed threat visibility, intelligence and security and compliance experts. With this support, SMBs can better defend existing infrastructure while addressing security challenges that occur during upgrades or migrations to the cloud.

Bio
Rohit Dhamankar is vice president of threat intelligence products at Alert Logic. Dhamankar has over 15 years of security industry experience across product strategy, threat research, product management and development, technical sales and customer solutions. Prior to Alert Logic, Dhamankar served as vice president of product at Infocyte and founded consulting firm Durvaanker Security Consulting. He holds two Master of Science degrees: one in physics from the Indian Institute of Technology in Kanpur, India, and one in electrical and computer engineering from the University of Texas at Austin.

 

The 5 Best Practices in Data Center Design Expansion

By Mark Gaydos, Chief Marketing Officer, Nlyte Software

When it comes to managing IT workloads, it’s a fact that the more software tools there are, the more risk and complexity is introduced. Eventually, the management process becomes like a game of Jenga: touching a piece in the wrong manner can topple the entire stack.

In the past, data center managers could understand all the operational aspects with a bit of intuitive knowledge plus a few spreadsheets. Now, large data centers can have millions, if not tens of millions of assets to manage. The telemetry generated can reach beyond 6,000,000 monitoring points. Over time, these points can generate billions of data units. In addition, these monitoring points are forecasted to grow and spread to the network’s edges and extend through the cloud. AFCOM’s State of the Data Center survey confirms this growth by finding that the average number of data centers per company represented was 12 and expected to grow to 17 over the next three years. Across these companies, the average number of data centers slated for renovation is 1.8 this year and 5.4 over the next three years.

Properly managing the IT infrastructure as these data centers expand is no game of chance, and there are proven best practices to leverage that will ensure a solid foundation for years to come.

5 Must-Adhere-To Designs for Data Center Expansion:

  1. Use a Data Center Infrastructure Management (DCIM) solution. As previously mentioned, intuition and spreadsheets cannot keep up with the changes occurring in today’s data center environment. A DCIM solution not only provides data center visualization, robust reporting and analytics but also becomes the central source-of-truth to track key changes.
  2. Implement Workflow and Measurable Repeatable Processes. The IT assets that govern workloads are not like Willy Wonka’s Everlasting Gobstopper—they have a beginning and end-of-life date. One of the key design best practices is to implement a workflow and repeatable business process to ensure resources are being maintained consistently and all actions are transparent, traceable and auditable.
  3. Optimize Data Center Capacity Using Analytics and Reporting. From the moment a data center is brought to life, it is constantly being redesigned. To keep up with these changes and ensure enough space, power and cooling is available, robust analytics and reporting are needed to keep IT staff and facility personnel abreast of current and future capacity needs.
  4. Automation. Automate as many of the operational functions that IT personnel perform as possible. This helps to ensure consistent deployments across a growing data center portfolio, while helping to reduce costs and human error. In addition, automation needs to occur at multiple stages, from on-going asset discovery and software audits to workflow and cross-system integration processes.
  5. Integration. The billions of data units previously mentioned can be leveraged by many other operational systems. Integrate the core DCIM solution into other systems, such as building management systems (BMS), IT service management (ITSM) and virtualization management solutions such as VMware and Nutanix. Performing this integration synchronizes information so that all stakeholders in a company benefit from a complete operational analysis (a minimal integration sketch follows this list).
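
As a hedged illustration of that last point (the endpoint, token and field names below are hypothetical; a real deployment would use the specific APIs of the DCIM and ITSM products in place), pushing a DCIM asset record into an ITSM/CMDB system typically amounts to a small REST call:

  import json
  import urllib.request

  # Hypothetical endpoint and token; substitute the real DCIM export and ITSM API
  # (for example, your CMDB's REST interface) used in your environment.
  ITSM_URL = "https://itsm.example.com/api/assets"
  API_TOKEN = "replace-with-a-real-token"

  def push_asset(asset: dict) -> int:
      """Send one DCIM asset record to the ITSM system and return the HTTP status."""
      req = urllib.request.Request(
          ITSM_URL,
          data=json.dumps(asset).encode("utf-8"),
          headers={
              "Content-Type": "application/json",
              "Authorization": f"Bearer {API_TOKEN}",
          },
          method="POST",
      )
      with urllib.request.urlopen(req) as resp:
          return resp.status

  if __name__ == "__main__":
      # Example record with made-up field names for illustration only.
      sample = {"asset_tag": "RACK-042-U17", "model": "1U server",
                "location": "DC-East / Row 4 / Rack 42", "power_watts": 350}
      print("ITSM responded:", push_asset(sample))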

Find a Complete Asset Management Tool

Technology Asset Management (TAM) software helps organizations understand and gain clarity as to what is installed, what services are being delivered and who is entitled to use them. Think of TAM as 80% process and 20% technology. Whatever makes that 80% process easier will help the IT staff better manage all their software assets. From the data center to the desktop and from Unix to Linux, it does not make a difference: all organizations need visibility into what they have installed and who has access rights.

A good asset manager enables organizations to quickly and painlessly understand their entire user base, as well as the IT services and software versions being delivered. Having full visibility pays high dividends, including:

  • Enabling insights into regulatory environments such as GDPR requirements. If the IT staff understands what the company has, they can immediately link it back to usage.
  • Gaining cost reductions. Why renew licenses that are not being used? Why renew maintenance and support for items that the organization has already retired? Companies can significantly reduce costs by reducing licenses based on current usage.
  • Achieving confidence in software vendor negotiations. Technology Asset Management empowers organizations to know, beyond a shadow of a doubt, what is installed and what is being used. The power is back in the company’s hands rather than the software publisher’s.
  • Performing software version control. This allows companies to understand their entitlements, how these change over time and who is using the applications. Software Asset Management also allows for software metering to tell, from the user’s perspective, who has, or needs to have, licenses.

Accommodating Your Data Center Expansion

Complexity is all too often the byproduct of expanding data centers, and it is not limited to IT hardware and software. To accommodate this expansion, facility owners are also seeking new types of power sources to offset OPEX. The AFCOM survey underscores the alternative-energy expansion by finding that 42 percent of respondents have already deployed some type of renewable energy source or plan to over the next two months.

Selecting the Right IT Management Tool

Many IT professionals fall into the cadence of adding additional software and hardware to manage data center sprawl in all its forms, but this approach often leads to siloed containers and inevitably—diminishing returns from unshared data. When turning to software for an automated approach to gain more visibility and control over the additional devices and services connected, it’s important to carefully consider all integration points.

The selected tool needs to connect and combine with the intelligence of other standard infrastructure tools, such as Active Directory and directory services, for ownership and location. Additionally, any new IT management tool that sums up the end-to-end compute system should be able to gather information using virtually any protocol and, if protocols are disabled or not available, must have alternative methodologies to collect the required information.

IT Workloads are too Important to be Left to Chance

IT workloads are too important to be left to chance and managing data centers is not a game. Pinging individual devices at the top of the stack to obtain information only yields temporary satisfaction. There may be a devastating crash about to happen, but without knowing the stability of all dependencies—the processing tower could topple. Don’t get caught in a Jenga-type crisis. Help mitigate risks with management tools that offer intuitive insights up and down the stack.

Bio
As Chief Marketing Officer at Nlyte Software, Mark Gaydos leads worldwide marketing and sales development. He oversees teams dedicated to helping organizations understand the value of automating and optimizing how they manage their computing infrastructure.

Edge Infrastructure Meets Commercial Property

By Michael C. Skurla, Director of Product Strategy, BitBox USA

Until very recently, buildings had the Smart prefix tacked on with promises of solving issues we didn’t care about or hadn’t dreamed up yet. Things such as personal control solutions, even personal temperature control, cropped up. Many of these smart extras were wrapped in grand marketing promises of future-enabled ecosystems.

At the heart of the matter, the building industry has been highly proprietary and fractured, with each solution competing for monetary attention alongside numerous other building trades on any construction budget. Lighting, security, irrigation, elevators, HVAC, wayfinding and dozens more each vie for core competency and the right to play, with a desire to gain revenue share of the fixed-size pie of a commercial building budget.

With the introduction of IoT into the marketing envelope, a new lease was offered on an old marketing game in commercial buildings. There was a twist, however: commercial building companies were somewhat behind the times compared to the residential side. People had already connected their homes, or at least embraced the advantages of connected things in their lives, powered by platforms such as Alexa, Siri and Nest by Google.

Commercial Building Gaps

Building management system (BMS) companies were exceptional at adapting to the game. With a core competency grown out of HVAC, BMS solutions superbly integrate and manage HVAC and are pseudo-network based. They naturally expanded their scope using their existing frameworks as a base and excelled at building automation. Given the siloed nature of the building industry as a whole, however, they still fell short of the expectations occupants had formed in their homes and personal lives, as well as the newly evolved demands of commercial buildings. These gaps included:

Scale – Managing one large building is one thing, but managing hundreds under one portfolio is challenging, with existing building management frameworks being cost prohibitive at that scale.

Synergy – BMSes are good at control but poor at leveraging inter-system data sets. Although a BMS may control many systems simultaneously, it lacks (without customization) the ability to learn from and act on interactions between those systems, so sensing platforms cannot be leveraged for greater efficiency and insight.

Micro-Analytics – While BMSes enable facility management and facility-trades analytics, that analysis cannot be used beyond the facility context. The data remains locked inside the building-infrastructure silo.

Commercial solutions needed a new methodology to address these needs, and the IT space, ironically, already had it.

Enter IoT platforms.

For years, Edge Data Centers faced the same struggle as the market matured. It’s important to note that Edge data solutions are deployed en masse, in the hundreds or thousands, across large swaths of geography. Much of the infrastructure used in commercial buildings, such as HVAC, security and power monitoring, is similar between traditional and Edge deployments. What differs is the sheer quantity, technological diversity and geographic spread. Staffing these edge locations 24×7 is impractical, so operations must be monitored and managed entirely remotely, with the same monitoring technology taking on tasks typically handled by on-premises staff.

IoT Platforms Offer Scale

Unlike BMSes and SCADA systems of the past, IoT platforms are built, at their core, around diverse data at massive scale and around simplicity of installation and growth. Instead of relying on onsite commissioning and often custom programming to bridge the hardware, IoT platforms natively extract data from dozens of in-building protocols and subsystems, normalize that data and move it to the cloud. The setup of these solutions is also vastly nimbler, generally consisting of an Edge Appliance wired to a network port that allows communication with cloud infrastructure. Everything is then provisioned, managed and monitored remotely from the cloud, making this a perfect fit not only for Edge Data Centers but also for commercial building portfolios.
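
As a rough illustration of that extract-normalize-publish flow, the Python sketch below reads points from two hypothetical protocol readers, flattens them into a common record shape and posts them to a placeholder cloud endpoint. None of the reader functions, field names or URLs reflect BitBox’s actual implementation; they are assumptions for the sake of the example.

```python
import json, time, urllib.request

# Hypothetical protocol readers standing in for BACnet, Modbus, SNMP, etc.
def read_bacnet_points():
    return [{"object": "AHU1-SAT", "value": 13.2, "unit": "degC"}]

def read_modbus_registers():
    return [{"register": 40001, "value": 231, "unit": "V"}]

def normalize(site_id, raw, protocol):
    """Flatten protocol-specific readings into one common record shape."""
    ts = int(time.time())
    return [{"site": site_id, "protocol": protocol, "ts": ts, **point} for point in raw]

def publish(records, endpoint="https://cloud.example.com/ingest"):
    """Push normalized records to a (placeholder) cloud ingestion endpoint."""
    body = json.dumps(records).encode()
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # real deployments would add authentication

records = normalize("site-042", read_bacnet_points(), "bacnet") + \
          normalize("site-042", read_modbus_registers(), "modbus")
print(json.dumps(records, indent=2))
# publish(records)  # uncomment once a real endpoint and credentials exist
```

In an actual Edge Appliance the readers would be real protocol drivers and the publish step would authenticate against the platform’s ingestion service, but the extract, normalize, publish shape stays the same.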

IoT Platforms Bring Synergy

Given the number of subsystems in a building and the growing number of technologies and IoT sensing devices, there is an exceptional opportunity to leverage data across diverse systems. There is significant redundancy in sensing across these building trades, which makes the technology, viewed holistically, expensive to install and maintain. A prime example is an ordinary office meeting room, which most likely has three occupancy sensors detecting whether someone is there: one for temperature control, another for lighting and security, and a third for a room reservation system. Each requires its own wiring, programming and a separate system to monitor. Why can’t one sensor provide all of this data?
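
As a sketch of how a single occupancy reading could serve all three consumers, the example below fans one event out to hypothetical HVAC, lighting/security and room-reservation handlers. The handler names and behavior are illustrative only, not any particular building system’s API.

```python
# One occupancy event, many consumers: a tiny publish/subscribe sketch.
subscribers = []

def subscribe(handler):
    subscribers.append(handler)
    return handler

@subscribe
def hvac_setback(event):
    print("HVAC: occupied setpoint" if event["occupied"] else "HVAC: setback")

@subscribe
def lighting_and_security(event):
    print("Lights on, cameras recording" if event["occupied"] else "Lights off")

@subscribe
def room_reservation(event):
    print("Booking marked as attended" if event["occupied"] else "Booking released")

def publish(event):
    for handler in subscribers:
        handler(event)

publish({"room": "4B", "occupied": True})  # one sensor reading, three systems informed
```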

Beyond this, there is a strong case for external data to be applied and combined with in-building data for AI-related functionality. Google Maps for traffic information, external business databases, Twitter feeds; the sky is the limit.

IoT Platforms Enable Micro-Analytics

With all of this data collected in the cloud from a portfolio of sites, its value multiplies for the emerging field of micro-service analytics. These analytics services and visualization engines tap organized data lakes, such as those provided by the IoT platform, and transform them into context-specific outcomes. Here are some possible scenarios:

  • Building data making actionable recommendations on building performance to reduce energy spend
  • IWMS (Integrated Workplace Management Systems) using the same data to analyze space utilization and recommend leasing adjustments
  • Retail marketing engines analyzing traffic patterns for merchandising

The analytics possibilities are endless through an ever-expanding marketplace of third-party micro-service companies, all enabled by the IoT platform and its consolidated API acting as a single source of data truth.
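
To make the idea of a consolidated API concrete, the snippet below queries a hypothetical endpoint for one normalized metric across several sites, the kind of call an energy, IWMS or retail-analytics micro-service might make. The URL, parameters and response shape are assumptions, not any vendor’s documented API.

```python
import json, urllib.parse, urllib.request

def fetch_portfolio_metric(metric, sites, base="https://api.example.com/v1/points"):
    """Query a hypothetical consolidated IoT-platform API for one metric across sites."""
    query = urllib.parse.urlencode({"metric": metric, "sites": ",".join(sites)})
    with urllib.request.urlopen(f"{base}?{query}") as resp:
        return json.load(resp)

# A micro-service consumes the same normalized feed regardless of its domain:
# data = fetch_portfolio_metric("occupancy", ["site-001", "site-042"])
# print(sum(point["value"] for point in data) / len(data))  # e.g. average utilization
```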

An Edge Site as a Commercial Building

IoT platforms in the commercial property sector aggregate, at scale, what is already integrated into buildings, allowing building technologies to come alive as part of the business rather than being treated as a necessary evil of simple facility maintenance.

Traditional building technology solutions have met the mark on improving facility performance from an operations and maintenance perspective. It is time to move beyond this, however. Facility data can serve considerably greater purposes, generating meaningful outcomes beyond the physical building when integrated with the breadth of other system data commonly referred to as “business operational information.”

This mix of information opens the door to analytics and visualization with wide-reaching implications for the enterprise’s top and bottom line.

This certainly does not advocate an end to SCADA or BMS solutions; quite the contrary. These systems perform vital control and operation of subsystems that should neither be duplicated nor replaced. An IoT platform layered on top of them lets each traditional silo perform its core function to the best of its trade ability, while every bit of data is analyzed, from vastly different angles, to serve the greater business good beyond the facility sector alone.

In our fast-paced, digitized infrastructure world, merging these various systems is critical to achieving profitable outcomes while enabling facility operators and managers to confidently make data-driven business decisions.

Bio
Michael C. Skurla is Director of Product Strategy for BitBox USA, which offers a single, simple and secure IoT platform for enterprises to collect, organize and deliver distributed data sets from critical infrastructure through a simple-to-deploy Edge Appliance with secure cloud access.