A decade ago, organizations expected a disconnect between IT and other relevant business units. After all, support was little more than a cost center, necessary to keep enterprises up and running but outside line-of-business (LoB) revenue streams. Movement in cloud computing, big data and, more recently, the burgeoning Internet of Things (IoT) has caused this trend to do a one-eighty.

IT is now a critical part of any boardroom discussion, with total network visibility playing a lead role in a company’s pursuit of a healthier bottom line. According to Frost & Sullivan, in fact, the network monitoring market should reach $4.4 billion in just two years, double its 2012 revenue. Of course, talking about the benefits of a “single pane of glass” is one thing; IT pros need actionable scenarios to drive better budgets and improve productivity. Here’s a look at the top 10 use cases for total network transparency.

1) Security

As noted by FedTech Magazine, the enemy of IT security is a lack of visibility. If you can’t view your network end-to-end, hackers or malware can slip through undetected. Once inside, this presence is given free rein until it brushes up against continuously monitored systems such as payment portals or HR databases. Complete visibility lets admins see security threats the moment they appear, and respond without delay.

2) Automation

Single-pane-of-glass visibility also lets IT pros automate specific tasks to improve overall performance. Consider eDiscovery or big data processing; while you can configure and perform these tasks manually, the IT desk’s time is often better spent furthering strategic business objectives. Total network visibility allows you to easily determine which processes are a good fit for automation and which are best left in human hands.
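As a simple illustration of the kind of routine task worth automating, here is a minimal, hedged sketch using only Python’s standard library: a scheduled reachability sweep over a device list. The hostnames and ports are hypothetical stand-ins for whatever inventory your monitoring tool exposes.

```python
import socket
from datetime import datetime, timezone

# Hypothetical inventory; in practice this would come from your
# monitoring tool's device list or a CMDB export.
DEVICES = [("core-switch-1", 22), ("mail-gateway", 25), ("web-frontend", 443)]

def check_tcp(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    stamp = datetime.now(timezone.utc).isoformat()
    for host, port in DEVICES:
        status = "UP" if check_tcp(host, port) else "DOWN"
        print(f"{stamp} {host}:{port} {status}")
```

Run something like this from cron or your scheduler of choice and pipe the output into your alerting workflow; deciding which processes merit this treatment is exactly the judgment call that full visibility informs.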

3) Identification

According to a recent Clearswift report, 74 percent of all data breaches start from inside your organization. In some cases employees are simply misusing cloud services or access points, whereas in others, the objective is to defraud or defame. Either way, you need to know who’s doing what in your network, and why. Visibility into all systems — and who’s logging on — helps combat the risk of insider threats.

4) Prediction

You can’t always be in the office. What happens when you’re on the road or at home but the network still requires oversight? Many monitoring solutions now include mobile support, allowing you to log in from a smartphone or tablet to check on current conditions. This is especially useful if you’re out of town but receive warning about severe weather moving in. Total visibility gives you the lead time needed to prep servers and backup solutions to handle the storm.

5) Analytics

Effective data analysis can make or break your bottom line. As noted by RCR Wireless, real-time network visibility is crucial here. The goal is interoperability across systems and platforms to ensure data collection and processing happens quickly enough to provide actionable insight into the key needs of your network.

6) Budget

With a seat at the boardroom table, CISOs and CIOs must now justify IT budget requests as part of their business strategy at large. Using a single pane of glass lets you showcase exactly where investments are paying off — analysis tools or intrusion-detection solutions, for instance — and request commensurate funding to improve IT performance.

7) Proactive Response

It’s better to get ahead than fall behind, obviously, but think of it this way: Network visibility lets you see infrastructure problems in their infancy rather than only after they affect performance. Proactive data about app conflicts or bandwidth issues gives you the upper hand before congestion turns into a backlog of issue tickets.

8) Metrics

Chances are you’ll be called to the boardroom this year to demonstrate how your team is meeting business objectives. Complete visibility lets you collect and compile key metrics that clearly show things like improved uptime, amount of data backed up or new devices added to the network.

9) Training

According to Infosecurity Magazine, 72 percent of IT professionals believe their company isn’t doing enough to educate employees about IT security. With insider threats at an all-time high, network visibility is critical to pinpoint key vulnerabilities and design effective training plans for employees to reduce the chances of a data breach.

10) End-User Improvement

Technology doesn’t always work as intended. And in many cases, employees simply live with poor performance; they grumble but don’t report network slowdowns or crashing apps. Without this data, you can’t improve the system at large. With total network insight, however, you can discover end-user pain points and take corrective steps.

Seeing is believing. More importantly, seeing everything on your network delivers actionable insight and bolsters the bottom line.

Application monitoring can help troubleshoot bandwidth bandits and other disruptions (credit: Jerry John | Flickr)

Cloud computing is a ready-made revolution for SMBs. Forget about server downtime; elastic computing and API-driven development are perfect for smaller organizations with project funding in the mere thousands of dollars.

All that agility is allowing information architects to think big — smartphone connectivity, IoT, lambda architecture — with existing app performance monitoring standards becoming more Web and socially aware.

Perfect world, right? Well, maybe a “perfectable” world. While developers are doing the elastic, agile thing — leveraging the power of pre-built tools through IFTTT or Zapier and getting Big Data tools from GitHub — they’re making assumptions about available bandwidth. They may even add Twilio to the mix so the company can SMS you in the middle of the night when their app hangs.
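To make that last point concrete, here is a minimal sketch of the Twilio wake-up call using Twilio’s Python helper library. The environment variable names, phone numbers and the health check itself are placeholders for illustration, not part of any particular stack.

```python
import os
from twilio.rest import Client  # pip install twilio

# Credentials and numbers are placeholders, read from the environment.
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

def alert_on_hang(app_name: str, healthy: bool) -> None:
    """Text the on-call phone when a health check reports the app as hung."""
    if not healthy:
        client.messages.create(
            body=f"{app_name} appears hung -- please investigate",
            from_=os.environ["TWILIO_FROM_NUMBER"],
            to=os.environ["ONCALL_NUMBER"],
        )

# Wire this to whatever health probe the app exposes, e.g.:
alert_on_hang("sales-dashboard", healthy=False)
```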

App Performance: ‘It’s Spinning and Spinning’

“I can’t do anything. It just keeps spinning,” you’re thinking. Classic Ajax loader. Users from an earlier era prefer freezing metaphors, but those are just as obvious, and they don’t encompass today’s issues: “My email won’t send,” “My daily sales dashboard won’t load” and, now, “The whole neighborhood’s smart meters are offline.”

A new set of network demands is rounding the corner, foreshadowing a greater need for application performance monitoring: SIEM, Big Data, IoT, compliance and consumer privacy audits. Together they spell the slow death of offline archiving. And for each, file sizes are on the rise and apps are increasingly server-enabled, often with heavy WAN demands.

Open Source, DIY and Buy-a-Bigger-Toolbox

Presented with bandwidth concerns, some support specialists (or DIY-minded developers, as is often the SMB way) will turn to open-source tools like Cacti to see what they can learn. And they may learn a lot, but often the problem lies deeper inside an app’s environment. As one support specialist, known as “crankysysadmin” on Reddit, explained: “It isn’t that easy. There are so many factors that affect performance. It gets even more tricky in a virtualized environment with shared storage and multiple operating systems and complex networking.”

Another admin in the Reddit thread agreed: In terms of app performance monitoring, he responded, “there’s no one-size-fits-all answer. What type of application are we talking? Database? SAP? Exchange? XenApp? Is it a specific workflow that is ‘slow’? What do you consider ‘fast’ for that same workflow?”
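For reference, the core measurement that tools like Cacti automate is simple arithmetic: poll an interface’s SNMP octet counters twice and convert the delta into utilization. A minimal sketch of that math follows; the counter readings are hypothetical, and a real poller would fetch ifInOctets/ifOutOctets (IF-MIB) from the device over SNMP.

```python
def utilization_percent(first: int, second: int, interval_s: float, link_bps: int) -> float:
    """Convert two octet-counter samples taken interval_s apart into % link utilization."""
    delta_octets = (second - first) % 2**32  # handle 32-bit counter wraparound
    bits_per_second = delta_octets * 8 / interval_s
    return 100.0 * bits_per_second / link_bps

# Two hypothetical ifInOctets readings taken 60 seconds apart on a 1 Gb/s link:
pct = utilization_percent(first=1_200_000_000, second=1_950_000_000,
                          interval_s=60.0, link_bps=1_000_000_000)
print(f"{pct:.1f}% utilization")  # -> 10.0% utilization
```

As the Reddit admins note, though, a clean utilization graph still won’t tell you whether the database, the hypervisor or shared storage is the real bottleneck; that is where deeper app-level monitoring comes in.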

Event-Driven Heads-Up for App Hangs and Headaches

App usage spikes have many possible causes, which is precisely why a commercial app monitoring tool that’s easy to use when you need it in a pinch can ultimately pay for itself. Depending on your site’s update policies, the types of applications you support, the regulatory environment, SLAs and cloud vendor resources, you’ll sooner or later be faced with:

  • Massive updates pushed or pulled unexpectedly.
  • Surprise bandwidth-sucking desktop apps.
  • Runaway developer apps.
  • App developer design patterns tilted toward real-time event processing.
  • Movement toward the more elastic management of in-house resources.
  • Management of bandwidth usage by cloud service providers.
  • A need to integrate configuration management with monitoring.
  • Increased support of operational intelligence, allowing for real-time event monitoring as described by Information Age.
  • Monitoring to develop application-dependent situation awareness.

The last of these, situation awareness, deserves emphasis. Consider the impact of moving monthly reports to hourly, or of a BI dashboard suddenly rolled out to distributor reps. Situational awareness at the app level can ward off resource spikes and sags, or even server downtime.

Identify What’s Mission-Critical

Whether the monitoring answer is open source or commercial depends partly on whether your apps are considered mission-critical. For some organizations, VoIP and Exchange have been those applications. The SLA expectation for telephony, for example, is driven by the high reliability of legacy on-premises phone systems, which rarely failed; SLAs for VoIP are often held to the same standard.

And what’s mission-critical is probably critical for job security. If the CEO relies on a deck hosted in SharePoint for a briefing at a major conference, and he can’t connect at the right moment, you may wish you had a bigger IT staff to hide behind.



It takes two to make a company. Sounds like a cliché, you say? Or is it a bold prediction from the retiring Cisco CEO? In his last keynote as CEO, John Chambers envisioned that many large companies of the future will have only two employees: a CEO and a CIO. All other functions will be outsourced. Did Cisco Live exhibit trends that could support this type of structure? Let’s take a look at three trends I observed:

Trend #1: Cloudy with no chance of rain

The writing is clearly on the wall for custom, underutilized hardware. Whether it is in networking, computing or storage, customers are embracing (demanding?) the flexibility to purchase resources on demand; that is, modular, interoperable software that can be deployed on commodity hardware. The hardware can be replaced at any time, with products from any vendor, without the applications missing a heartbeat. Workloads can run anywhere without the worry of downtime (no rainy days!). This can be in a private cloud (racks of commodity equipment) or elastic capacity bought in the public cloud. More importantly, these applications and workloads are increasingly directly related to the business function the company provides. Other supporting functions like sales, CRM, accounting and collaboration tools are outsourced. Clouds, public or private, are changing how business is run. Businesses will continue to buy capacity in an elastic manner to adapt to changing needs. And all they will need is a CIO to make sure it all keeps working seamlessly. At Cisco Live, this trend was clearly evident in Cisco’s Application Centric Infrastructure.

Trend #2: Connected Everything

On my way to Cisco Live, I traveled with my personal administrative assistant. She told me when my airport shuttle was arriving, which terminal and gate my flight was leaving from, and which train I needed to take to get to the hotel. And she never missed a beat. Her (his?) name is “Google Now,” and s/he is helped by an army of robots connecting everything imaginable. Needless to say, I didn’t need to visit the “Connected Everything” exhibits at Cisco Live to be convinced that tomorrow’s traveling CEOs may not need a dedicated human assistant. This function is being automated where possible and outsourced where not.

Trend #3: Eyes on Everything

So who watches over all of this outsourced and automated infrastructure to make sure critical business functions are not impacted? Infrastructure monitoring was the third prominent theme at Cisco Live, specifically with the mindset of a “single pane of glass.” When business-critical needs are met by a diverse set of resources, the need for a single pane of glass is even greater. Automation and correlation of the raw data from multiple sources, translated into meaningful business metrics, would allow both the CEO and the CIO to make decisions in real time when necessary and generate analysis to support those decisions.

Of course, we may sound overly imaginative with the notion of a two-person company. But a two-function company (a business function and a technology function) is certainly within the realm of possibility. And if Chambers is right, we will see companies reorganizing around these two disciplines.

Our own single pane of glass just got better. Check out the new WhatsUp Gold version 16.3.

 

In just a few days we’ll be listening to “Auld Lang Syne” and watching the ball drop in Times Square. As we plan for 2015, I found myself reading Gartner’s Top 10 Strategic Technology Trends for 2015, and I want to share a few thoughts based on two of them:

  • Cloud/Client Computing: For businesses, Cloud/Client Computing has an additional component beyond Gartner’s omni-portable linkage between the cloud’s compute/data/management and client devices. Apps for the business cannot be viewed in isolation. Beyond data synchronization, IT will also have to address the integration layer between public cloud and private cloud, and between cloud and on premise applications, for rich sharing and use of data within business workflows.
  • Risk-Based Security and Self-Protection: We seem to have reached a tipping point that Gartner alludes to: security can no longer be fully managed by IT. There are just too many threats, and the paradigm shift of applications themselves pre-empting some of these threats will be welcome. Gartner correctly views this as part of a multifaceted approach. We believe that monitoring of how threats spread will lead to new dynamic response methodologies, perhaps bot-implemented, going well beyond today’s analysis of threat signatures. Stopping threats rather than dealing with their consequences is something for IT to look forward to.

Speaking of stopping threats, are you constantly on edge about the safety of your stored and transferred files? Using the right file transfer system is paramount in securing files and sensitive data. The MOVEit Managed File Transfer System is designed specifically to give control over sensitive data to the IT department, to ensure better security throughout the entire file transfer life cycle. Download our white paper entitled Security Throughout The File Transfer Life-Cycle to learn more.

As we head into 2015, what will the New Year have in store for IT? Only time will tell!

The evaluations are complete and the decision has been made: a move to the cloud is in the best interest of your organization. Transferring workloads to the cloud to free up or discard costly on-premise resources in favor of the fast deployment and flexibility of an elastic environment has overwhelming appeal, but now what? Despite the many advantages of a cloud environment, there are still pitfalls to navigate in order to ensure a positive engagement and user experience. To that end, I would offer two pieces of advice to colleagues looking to transform their organization from a strictly on-premise environment to a cloud user.

First, pick the right provider. While this may seem obvious and simplistic, I can’t begin to stress how important it is, or count how many cloud migrations have met an untimely demise due to a less-than-adequate partner. When evaluating service providers, there are certain non-negotiable items you must account for; chief among them are security, reliability and responsiveness. Like it or not, you are ceding an element of control in this relationship, so top-notch support and trust are paramount. You want a secure, integrated, centrally managed and easy-to-use environment with service level agreements (SLAs) that commit to minimum standards of availability and performance, especially at peak demand. Timely responses to change requests, backup needs and security patches are also key considerations.

Second, choose the right workloads. The cloud can be a powerful and efficient tool for your business, but that does not mean every application is best suited to a cloud environment. When developing your integration strategy, keep in mind that low-to-medium-security workloads, workloads without stringent latency requirements, and elastic workloads with variable traffic work well. Workloads whose data must be frequently pulled in-house for use by other systems are perhaps best left in-house, and high-security and compliance-monitoring needs are also better suited to on-premise use. Keep integration requirements in mind as well: workloads tied to proprietary hardware are not good candidates for public clouds but may be fine in a private or hybrid environment.

The cloud can transform your organization if you manage it correctly, but it takes due diligence on your part to ensure that the move goes as planned. By doing your research ahead of time and developing a list of key considerations for your business, you can ensure that the process will be both smooth and successful.

 

 


 

 

According to a recent Ponemon Institute report, 72 percent of the 600 IT professionals surveyed believed their cloud service providers would fail to inform them of a data breach involving the theft of confidential business data, and 71 percent believed the same for customer data.

Healthcare organizations have been hesitant to relinquish any perceived control over their information, and yet the investments and resources required to securely store and manage files “on-premise” have become a burden most facilities can no longer shoulder. IT teams lack the bandwidth and expertise to manage the growing volume and traffic of Protected Health Information (PHI). The move to the cloud has become inevitable because of the increasing complexity and burden of managing compliance processes.

Moreover, given the Omnibus rule that took effect in September 2013, compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) has never been more pressing. With security breaches occurring at an alarming rate and federal regulations expanding, the push toward compliance has driven businesses large and small to explore the requirements, and the options available, for achieving and maintaining HIPAA compliance.

Cloud-based solutions provide significant value for the healthcare industry, giving organizations superior security and control when managing sensitive health data, especially PHI. Our customers in organizations required to adhere to HIPAA regulations tell us that a cloud-based managed file transfer (MFT) solution offers numerous advantages: industrial-grade security, lower risk, reduced time and resources needed to achieve and maintain HIPAA compliance, higher reliability and availability backed by service level agreements, and cost savings as IT staff is freed to focus on other operational tasks.

The benefits of cloud provide a compelling reason for organizations to move to a managed cloud environment; here are a few best practices to keep in mind:

  • Invest in partners that are well-equipped to manage the breadth of HIPAA standards, and who are able to provide the tools needed to demonstrate compliance to your auditors;
  • Make sure to look for partners that provide a packaged HIPAA compliant environment that satisfies electronic protected health information (ePHI)-related legal obligations in HIPAA/HITECH legislation; and
  • Recognize from the start that your HIPAA compliance will usually involve a hybrid solution that combines both cloud and on-premise elements. A combination can provide the enabling “fabric” that will make it possible to do business moving forward.

To read more on this topic, check out my full article in HITECH Answers.

Recent news from Intralinks is just the latest instance in which the security of Enterprise File Sync and Share (EFSS) vendors like Box and Dropbox has been questioned. The EFSS rival reports that generating links to share documents can put sensitive data at risk through several basic flaws. It turns out that a link generated to be accessible only by trusted sources can actually be viewed by third parties (that is, not the people you want accessing it). Intralinks said it discovered the vulnerability as part of Google AdWords research.

While the companies scramble to address the issue and patch the flaw (at the time of publishing, Dropbox had issued a fix), it presents the opportunity to once again distinguish EFSS from Managed File Transfer. We spend a lot of time talking about this with customers and prospective clients, and recently developed a white paper and blog post on the topic. Check them out and let us know what you think.

I recently attended CIOboston, a CIOsynergy event headlined “A New Dimension to Problem Solving Within the Office of the CIO.” We talked about paradigm shifts propelled by technologies like the cloud, the necessary new engagement models for business and IT, and the changing world of expectations, to name a few topics. But before getting to all this, our moderator, Ty Harmon of 2THEEDGE, posed a simple question to the attending 50 or so CIOs and senior IT heads: “What are your challenges?”

Here are the answers that I have assembled. I think there is value in seeing what was/is top of mind for IT leaders in raw form:

  • How do we make the right choices between capital and expense?  Service offerings are growing and additive – the spend never ends.
  • How do we integrate multiple cloud vendors to provide business value?
  • User expectations are being set by the likes of Google and Amazon for great UX, 7X24 support, etc. – but it is my IT staff that is expected to deliver all that on our budget. The business does not want to see the price tag – but they want the same experience that is available at home from these giants.
  • IT needs to run like a business but this takes a lot of doing. It matters how we talk and collaborate. We have to deliver business results that must be measurable.
  • Adoption of the cloud is a challenge. How do we assess what is out there? It is not easy to do apples-to-apples comparisons and security is a big concern.
  • How do we go from private to public cloud? Current skill sets are limited.
  • We are constrained by vendors that are not keeping up with the new technologies! One piece of critical software may want an earlier version of Internet Explorer to run; another may use an obsolete version of SQL Server, etc. This clutter prevents IT departments from moving forward.
  • Business complexity is a challenge. IT is asked to automate – but we must push back to first simplify business processes.
  • “Shadow IT” is an issue. A part of the business goes for a “shiny object” rather than focusing on what is the problem that really needs to be solved. They do so without involving IT. Then IT is expected to step in and make it all work, integrate with other software and support it.
  • Proving ROI is a challenge.
  • Balancing performance, scalability and security is tough.
  • How do you choose old vs. new, flexibility vs. security? It isn’t easy.
  • How do we support more and more devices?
  • How do you fill security holes that are in the cloud?
  • How do you manage user expectations and find the right balance for supporting them with limited resources?
Many heads nodded as these challenges were spoken of. But all agreed that these are exciting times, and that IT will push forward through them and be recognized as the true business enabler it is. What are your thoughts? Were you nodding your head at these questions?

Let’s face it: For many companies that handle payment card data, the search for a safe and secure way to store and transfer information in the cloud hasn’t always led to a feeling of full-blown confidence. And, the reality of so many new breaches doesn’t help.

While the road to PCI compliance can seem long and daunting, it is possible – and with the right guidance, can be easier than you thought. So, for those feeling like pulling their hair out, worry not!

Check out this article in Retail Online Integration in which I outline four actions that are important to making PCI compliance a tangible and achievable reality:

  • Understand the difference between PCI compliance and certification,
  • Get the business involved,
  • Develop a plan, and
  • Make education a priority.

Retailers and other companies required to be PCI compliant, we’d love to hear from you – please share your experience or questions.

To kick off the year, we asked two of the leading influencers in Managed File Transfer (MFT) to share their perspectives on the year that was and their predictions for what 2014 holds.

Stewart Bond, Senior Research Analyst at Info-Tech Research Group (@StewartLBond), netted out the Managed File Transfer trends, highlighting:

  • Cloud Deployments: MFT has traditionally been deployed behind the firewall, used for internal and external file transfer. With the growth of the public Cloud, applications and platforms are moving outside the firewall. If applications, data and platforms are in the Cloud, MFT vendors need to be there too. MFT grew out of the need for better security, control and management of file transfers. FTP is still prevalent, especially in the Cloud, and MFT vendors have a great opportunity to leverage their history and capabilities to make the Cloud, and the data pipelines to and from it, more secure.
  • Mobile Access and File Transfer: Computing has gone mobile, and the need to protect corporate data assets as they move through secure and unsecure networks will be critical. MFT vendors have an opportunity to apply their technology in this space to help organizations reduce data protection risks.
  • File Transfer Acceleration: Primarily for cloud-to-cloud and on-premise-to-cloud transfer. While we have enjoyed fast transfer rates on LANs and within the data center, transfer rates over the internet are still lacking; until the infrastructure catches up, if it ever does, software-based acceleration solutions will become more prevalent.
  • Cloud File Sharing: We are seeing overlap between the MFT space and the Cloud file sharing space. Vendors such as Ipswitch are finding they are competing with the likes of Box, Dropbox and other Cloud-based file sharing solutions. MFT vendors have met the competition head-on with ad-hoc file transfer capabilities. However, MFT vendors will need to make their solutions as accessible and easy to use as the Cloud-based file sharing alternatives in order to compete effectively.

I’m interested in your thoughts on Stewart’s predictions—any points to expand on or debate?

 

Pizza Delivery is a Lot Like an SLA
Depending on the bus to deliver my pizza to paying customers is not going to cut it. I need transportation that I can bet my business on.

In my last post, I talked about moving pianos in a Yaris through 2 feet of snow, and the results you might expect from such an endeavor. I did my best to relate that effort to managed file transfer (MFT), and hope I didn’t lose you in the analogy. If you’ve gotten this far, I have another one to drop on you…

Let’s say for a moment that I like pizza…

Scratch that; let’s say I LOVE pizza. (Bear with me…) I have a problem: I need to retrieve my pizza from my absolute favorite pizza place (they don’t deliver) while it’s still hot and fresh for dinner tonight. I’ve just phoned in the order from home, and the pizza shop is 2 miles away. This has quickly turned into a transportation problem. I can walk to the bus stop, catch a bus to the pizza place and, if I’m lucky, catch another quick bus home, to be eating my pizza within 45 minutes. That’s reasonable, is it not? The bus schedule is fairly regular, depending on traffic, and I can more often than not successfully get my personal pizza fix by taking the bus.

Now let’s say my love affair with pizza evolves into something more substantial… I decide to start my own pizza delivery service. I’ve now made the leap from entertaining a personal pizza fetish to running a business. Hopefully, I’ll have many hungry people waiting on my pizza, ready to pay hard-earned cash for a slice or pie.

Clearly, depending on the bus to deliver my pizza to paying customers is not going to cut it. The bus may or may not show up when I need it. The fare isn’t reduced if the bus is late, but my tip and recurring business sure will be. I need dependable transportation, such as a 4×4 truck, to run my business. I need confidence that this truck will be waiting on me and can weather events (pun intended), such as a blizzard dropping two feet of snow. People get hungry in snowstorms too.

Rather than delve back into the MFT or EFSS decision, I want to talk about a critical distinguishing characteristic of your file transport solution, be it of either strain. This characteristic is the Service Level Agreement (SLA) offered by your service vendor, if one is offered at all. Just having one is not sufficient; you need to look for three key terms that will let you know if you might be left waiting at the bus stop. Unfortunately, it’s easy enough for service vendors to dress their bus up to look like a 4×4 truck, so let’s go over the key points of an SLA.

  1. Service Availability – This should consist of some number of nines (e.g., 99.9%) and, most importantly, spell out service uptime targets and a credit schedule. This credit schedule should outline what the vendor will owe you if the service level they contractually agree to falls below the target. Some vendors claim a high level of availability but don’t back that up with a credit schedule. If they miss their target, you have no recourse as a paying customer. In other words, you’ve already paid for the bus, but you’re stuck at the station.
  2. Recovery Point Objective (RPO) – This figure represents the vendor’s target for retrieving your data in the event of a disaster. If a reasonable objective is 30 minutes, the vendor is setting a target of losing no more data than that stored in the 30 minutes preceding an event. For example, while it’s unlikely that a service provider would create tape backups to achieve an RPO, if that were the case, a 30-minute RPO would require tape backups every 30 minutes! Imagine leaving your pizza on the bus for 5 minutes, only to have it disappear. Your bus should guarantee your pizza, if you’re relying on it to do business.  Your pizza wouldn’t disappear if you left it in your truck, would it?
  3. Recovery Time Objective (RTO) – This figure represents the vendor’s target for returning access to your data and the service in the event of a disaster. Without an RTO, your service provider is giving you no indication of how long the service might be down following a disaster. For example, if the objective is 3 hours, the vendor is setting a target of no more than 3 hours to return access to your data via its service, following a disaster. This is effectively the equivalent of the bus service telling you by what time it will resume service. One consumer-grade EFSS service provider that doesn’t compensate users for outages – even paid subscribers – achieved an estimated 99.63 percent uptime for the second half of 2012, according to an independent news site covering cloud software for Australian and New Zealand businesses. It was down 16 hours in January of 2013, and another 90 minutes in May. Can you afford to be stuck at the station for 16 hours? (For a sense of what these percentages mean in hours, see the quick calculation after this list.)
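To put those availability figures in concrete terms, here is a quick back-of-the-envelope calculation in Python. It shows how much downtime a given number of nines actually allows per month, and what a single 16-hour outage does to a monthly uptime figure; all numbers are illustrative, not taken from any particular vendor’s SLA.

```python
def allowed_downtime_hours(availability_pct: float, period_hours: float = 730.0) -> float:
    """Downtime budget implied by an availability target (default period: ~1 month)."""
    return period_hours * (1 - availability_pct / 100)

for tier in (99.0, 99.9, 99.99):
    print(f"{tier}% availability -> {allowed_downtime_hours(tier):.2f} hours/month of downtime allowed")

def uptime_percent(outage_hours: float, period_hours: float) -> float:
    """Achieved uptime given total outage time over a period."""
    return 100 * (1 - outage_hours / period_hours)

# A single 16-hour outage in a 31-day month:
print(f"{uptime_percent(16, 31 * 24):.2f}% uptime")  # -> 97.85% uptime
```

At 99.9%, the monthly downtime budget is well under an hour; a single 16-hour outage blows through even a modest two-nines target for that month.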

Service vendors may offer 100% network uptime, but it’s critically important to understand the stated application uptime. Premium cloud infrastructure environments can be up 100% of the time, but access to the hosted application is all that should matter to you. Are vendors really telling you that the application will never, ever, not even once, for a half-second, be unavailable? If it sounds too good to be true, make sure you understand the claim and have some recourse. If a service provider truly wants to be your business partner, it should be willing to share the pain in the case of an outage.