WhatsUp Gold software products have recently been certified under the Common Criteria Evaluation and Validation Scheme (CCEVS). (See today’s announcement for more details.)

Most folks call it Common Criteria. If you are not familiar with it, it’s an internationally recognized standard that allows organizations to confidently assess the security and assurance of IT software, and specifically to ensure that products meet an agreed-upon security standard for certain government deployments.

With Common Criteria certification in place, our customers have the added confidence that our WhatsUp Gold products have been validated against rigorous security standards, including user data protection, fault tolerance and authentication.

There’s a lot of work involved. We worked with an authorized third party, whose evaluation was performed in a rigorous, standard and repeatable manner, at a level commensurate with the target environment for use.

What’s significant about Common Criteria certification for you? It might just get a little easier to procure and use WhatsUp Gold.

If you work for a U.S. Federal government agency:

A U.S. Federal mandate requires that security evaluations of IT products be performed to consistent standards. The program behind that mandate also aims to:

  • Encourage the formation of commercial security testing laboratories
  • Meet the needs of government and industry for cost-effective evaluation of IT products
  • Improve the availability of those products

If you work at any organization in any of these 27 countries:

The Common Criteria Mutual Recognition Arrangement has 27 member countries, including all of North America, most of Europe, Australia, Israel and beyond. Under the arrangement, each member nation recognizes Common Criteria certificates issued by the others, so certified products can be procured without the need for further evaluation.

A list of the WhatsUp Gold software products that now meet Common Criteria standards can be found in today’s announcement.

We’ve been making lots of noise in the security space this year. Last month we joined the Open Web Application Security Project (OWASP).  Additionally, MOVEit® Managed File Transfer software achieved Payment Card Industry Data Security Standard (PCI-DSS) certification.

News broke yesterday afternoon that a group of hackers had compromised file transfer servers at several leading organizations after obtaining credentials for thousands of FTP sites. According to the report, hackers were even able to upload several malware files to an FTP server run by the NYT and picked up a list of unencrypted credentials from an internal computer. A big concern there – and in particular for an organization with a large email database like the NYT’s – is that those files could be incorporated into malicious links used in spam messages.

My initial reaction: how is FTP security still making headlines in 2014? And secondly: hacks like this are exactly why people are more carefully evaluating their use of file transfer and, in some cases, moving away from FTP to other forms of file transfer that better suit their needs.

FTP servers are online repositories where users can upload and download files, and they’re designed to be accessible remotely via login and password. In some FTP setups, files remain there unencrypted and susceptible to foul play should credentials be obtained by the bad guys, as was the case here.

Reading deeper into the story, we can glean a few things about the compromised data in the FTP servers:

1) It was unencrypted, and therefore an immediate leak would not require much additional work by hackers. Any organization transferring sensitive data should use encryption while data is in motion and at rest.

2) Once one server gets hacked, others follow. What was hacked was most likely an application that stored the credentials insecurely, or perhaps a programmer working on that application clicked a link that scraped his machine for passwords. The hackers could then use those passwords to access new sites, and so on.

3) It’s unclear if the data was used for destructive purposes, i.e. the spamming example I mentioned above. Because most FTP servers offer poor reporting and auditing features, it can be difficult to piece back together what the attackers did once inside the FTP server.

Additionally, the FTP passwords must have been stored in clear text or encrypted with a sloppy algorithm or lazy key management. This is inexcusable in today’s digital age. These organizations could have salted and hashed their passwords, greatly improving their security.
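
To make the point concrete, here is a minimal sketch of the salt-and-hash approach using only Python’s standard library. It’s illustrative rather than a hardened implementation; the iteration count is an assumption on my part, and a production system would typically reach for a dedicated password-hashing library (bcrypt, scrypt or Argon2) and tune the work factor to its own hardware.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune for your own hardware


def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the plaintext password."""
    salt = os.urandom(16)  # unique random salt per user defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest


def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Even if an attacker walks off with the stored salts and digests, each password still has to be brute-forced individually, which is exactly the extra work these organizations denied themselves by keeping credentials in clear text.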

In summary, there are a few critical steps your business can take to decrease file transfer risk:

1)      Make sure to store credential information securely, encrypted with multiple diverse, complex keys.

  • Only use secure protocols for transfer
  • Salt and hash passwords; never store the actual password
  • Disable anonymous access (if allowed at all)
  • Require multi-factor authentication (with certificates, smart cards or IP address limits)

2)      Check the file’s payload.

  • Scan files for viruses and malware on upload
  • Limit the file types that can be uploaded (no .htm, .php, .vbs, .exe, etc.); see the sketch after this list

3)      Make sure to have good reporting and auditing of suspicious logins.

4)      Protect your file transfer server

  • Frequent penetration tests
  • Frequent vulnerability scans
  • Static code analysis
  • Store files encrypted so they cannot be easily executed on the server’s host OS

5)      Ensure your teams, all of them, are aware of security and know not to click on things from dubious sources. All it takes is one click on one bad link to create a breach.
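
As a concrete illustration of the file-type restriction mentioned in the checklist above, here is a small sketch of an upload gate. The allowlist and size cap are hypothetical examples, not a recommendation for any particular product; tailor them to the file types your business actually exchanges, and still scan whatever passes the gate for malware.

```python
import os

# Hypothetical allowlist and size cap; adjust these to your own environment.
ALLOWED_EXTENSIONS = {".csv", ".pdf", ".txt", ".xml", ".zip"}
MAX_SIZE_BYTES = 100 * 1024 * 1024  # 100 MB


def is_upload_acceptable(filename: str, size_bytes: int) -> bool:
    """Reject executable or script file types and oversized files before they reach storage."""
    extension = os.path.splitext(filename)[1].lower()
    if extension not in ALLOWED_EXTENSIONS:
        return False  # blocks .htm, .php, .vbs, .exe and anything else not on the list
    if size_bytes > MAX_SIZE_BYTES:
        return False
    return True  # passing this check does not replace virus/malware scanning on upload
```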

FTP has been around for more than 40 years, and we continue to see breaches like these on a regular basis. Simply put, companies need to carefully evaluate their systems to make sure their usage of technology maps to their needs. I guess I shouldn’t be surprised that data breaches via FTP still occur today, but more organizations should understand the risks involved, and seek solutions that improve all aspects of file transfer.

Nothing ever stays the same in the world of information security. Each day we see new threats and challenges, along with new solutions, tactics and approaches. Despite the ever-changing nature of the space, there are however a few constants – one of them being the annual RSA Conference.

Considered by many (myself included) to be the premier IT security event, RSA features keynotes and sessions from some of the world’s foremost experts – including those from business, government and academia. If you’re interested in being among the first to know about a particular topic or trend, this is the place to be. In fact, it’s where I’ll be in just a few short days.

So what am I looking forward to the most? Here are five things in no particular order:

1) New Insights on Cloud Security: If you scan the RSA Conference 2014 tracks, you’ll notice that cloud security is getting a fair amount of attention – and for good reason. After realizing the benefits of adopting the cloud (cost, efficiency, etc.), organizations quickly discover the challenges and concerns, which almost always center on security. While we have our own take on this matter, I’m interested to hear what others have to say. Thus, some of the sessions I’m most looking forward to include “Is the Cloud Really More Secure Than On-Premise?”, “Virtualization and Cloud: Orchestration, Automation and Security Gaps” and “Trust Us: How to Sleep Soundly with Your Data in the Cloud”.

2) The Networking: The RSA Conference is well-known for attracting some of the best and brightest from a wide range of industries – and this year’s conference will be no exception. Here are a few of the featured speakers that I’m hoping to catch:

  • Selim Aissi, Vice President, Global Information Security, VISA
  • Marene Allison, Global Chief Information Security Officer and World Wide Vice President of Information Security, Johnson and Johnson
  • Bob Blakley, Global Head of Information Security Innovation, Citigroup
  • Mary Ann Davidson, Chief Security Officer, Oracle
  • Scott Andersen, Director, Global Information Security, Citi
  • Bret Arsenault, Chief Information Security Officer, Microsoft Corporation
  • Joseph Demarest, Assistant Director of the Cyber Division, FBI
  • Eran Feigenbaum, Director of Security, Google Apps, Google

3) Stephen Colbert: I’m not sure how much Stephen Colbert knows about information security, but I’m not sure that it matters. As a long-time fan of the Colbert Report, I was thrilled to find out that he’ll be one of the featured keynote speakers. Who says that information security isn’t funny?

4) Alternate Realities: Here at Ipswitch, we tend to discuss file transfer security, compliance and other matters through the lens of a business. But at this year’s conference, we’ll get to see how security is viewed by large government organizations like the FBI, as well as by venture capital firms, economists, academics and other personas that those of us in the business world sometimes forget about. If you’re looking to expand your understanding of information security, there’s no better place to be than the RSA Conference.

5) The Food: This year’s event will be held in San Francisco, a haven for foodies like myself. Thus, I’ve already spent a considerable amount of time on Yelp scoping out restaurants and other hotspots. Clearly this is important to me. I’ll be coming back with a renewed appreciation for the importance of information security, but also a few good meals. Thankfully, they only hold this event once per year.

********

What are you looking forward to seeing at this year’s RSA Conference? Be sure to let us know in the comments section. Or let me know your recommendations for must-eat restaurants!

I am keeping up with the Olympics at home, but I suspect some of my bandwidth-hoarding colleagues are catching some of the competition while at work. With Sochi nine hours ahead, it is reasonable to think that many folks are trying to catch what’s happening before they can catch it on NBC. Now consider that March Madness (here in the U.S.) is quickly approaching. Wireless network bandwidth hoarding may quickly be becoming a national pastime, and a headache for businesses and universities.

No IT pro wants to see folks reacting to slowed access like Ashley Wagner did when seeing her scores from the judges last week. Fear not, there’s no need for grumpy cats to growl at BYOD when IT pros can truly understand the source of wireless performance problems like:

– Users who experience poor performance from oversubscribed access points or poor signal strength

– Business-critical applications that get bogged down by bandwidth hoarding, including folks accessing unauthorized music, video, or gaming apps (or watching the Olympics or March Madness)

– Increased network density from BYOD that may surpass your initial wireless network deployment

– The security impact of exposure to rogue access points

Addressing these performance problems requires the ability to determine how best to redeploy, update, and protect your wireless network. This will let you handle what’s happening now and lay a strong foundation for the future.

How you can gain these abilities is something my colleagues will be glad to teach you during an upcoming webinar. On Tuesday, February 25, join us for “4 Ways Network Monitoring Improves Health of Networks”. Register here to catch the webinar at 8am US ET that Tuesday, or here if you’d rather join at 2pm US ET.

And like the Olympics and March Madness, there’s always a replay.

The world of cyber-security is just as turbulent as ever. In just the past few weeks, we’ve witnessed major credit card security breaches at Target, Neiman Marcus and Michaels – three of the world’s top retailers. While the media has largely focused on how this affects consumers, there’s another discussion taking place behind the scenes, and that’s within the IT departments of almost every organization that handles credit card information.

The topic? PCI Compliance.
By definition, if a business processes credit card or debit card payments, it must adhere to the regulations of the Payment Card Industry (PCI). Pretty straightforward, right? Wrong. Despite the mandate, there remains a great deal of confusion among businesses (large and small) as to what PCI compliance actually entails. Fortunately, much of this misunderstanding falls into one of four major myths of PCI compliance. Let’s take a closer look.

Myth #1: Compliance Equals Certification.
In January 2014, Ipswitch became the first to announce an official PCI-Certified, cloud-based MFT solution with its MOVEit Cloud Environment. The important word in that sentence is “certified.”

Most businesses don’t realize that there’s a difference – and a significant one – between being PCI compliant and PCI certified. It’s fairly easy to achieve PCI compliance. All that’s required is the completion of a self-assessment questionnaire. It usually takes about a half day and a pinky-swear promise.

Certification against PCI Data Security Standard (DSS) V2.0, on the other hand, is a much more comprehensive process, involving a full-scale audit by a qualified security assessor (QSA) and covering roughly 288 controls. These include detailed reviews of how software is developed and how engineers were trained, daily reviews of more than 200 different streams of audit events, and a fully documented software development lifecycle. In all, it’s a process that takes about half a year to complete.

It’s important to note that there is essentially no difference in the requirements of PCI certification and compliance. The difference is in who verifies them and how well-documented the evidence must be.

Essentially, it’s best to think of compliance as a claim, and certification as proof.

Myth #2: PCI Compliance is a Technical Problem.
It’s fairly common for businesses to believe that all it takes to avoid PCI-related issues is the right set of features – encryption, anti-virus protection or some other security voodoo along those lines. They see it as a purely technical matter, when in fact it has significantly more to do with people, policies and processes.

In fact, our QSA spent considerably more time reviewing our written policies, training documents and other formal documentation than they did reviewing our code.  They resorted to using automated tools for that arduous task.

This tends to come as a surprise to most retail organizations, especially those that fail a PCI audit. They find that the failure wasn’t the result of poorly written code, but rather of code that was poorly documented. It wasn’t the fault of a programmer, but rather the lack of materials showing how they were trained or how their process meets a given PCI regulation.

The lesson here: If your business wants to stay in compliance with PCI requirements, it starts with your policies and procedures. The technical aspect is not as monolithic as you may have been led to believe.

Myth #3: PCI Compliance is Forever.
Retailers would like to look at PCI compliance the way most of us view our driver’s license: pass the test once and you’ll never need to take it again. Of course, it doesn’t work that way. Not only are the threats evolving on a day-to-day basis (more on this in a moment) but the PCI targets themselves are being updated and amended. As such, PCI compliance should always be viewed as an ongoing objective: a process of continuous improvement.

Here’s a good example: it’s not enough that PCI-certified businesses must renew their certifications annually; quarterly scans are also mandated to ensure flaws haven’t crept in. Daily operations at these companies must also adjust to include audit log review every day, to demonstrate the proper controls are in place.

These examples reinforce the notion that PCI compliance is not a one-and-done assignment to be crossed off a checklist. Rather, it’s a set of practices that fundamentally changes the way your business operates.

Myth #4: Enterprise Compliance is Easier to Manage In-House.
When we announced that the MOVEit Cloud environment became the first of its kind to be PCI certified, it caught more than a few people off guard – and for good reason. Up until then, a cloud-based file transfer solution that was also PCI certified was practically unheard of. At Ipswitch, we think it makes perfect sense.

We crafted a team to create a cloud infrastructure comprising state-of-the-art protection controls and to follow every single PCI-DSS regulation, down to the letter. The disciplines required to deliver this level of confidence as a service are sometimes difficult to replicate in an already overtasked IT department.

So, can a cloud-based managed file transfer system offer just as much security as your legacy system? Absolutely. Is it easy or cost-effective to maintain and secure your own system? Not so much.

As companies begin to understand the capabilities of the cloud – and how it can meet and exceed their enterprise-grade security requirements – secure, compliant managed file transfer becomes merely a checkbox for your auditors.

We hope we’ve cleared up a few major myths surrounding PCI compliance, but as you can imagine, there are many more. What are some common myths you’ve encountered when it comes to PCI compliance? Be sure to let us know in the comments section.

In The Business Case for Managed File Transfer – Part I, a back-of-the-envelope calculation based on the findings from Aberdeen’s research showed the following advantage for companies that use managed file transfer (MFT) solutions, compared to companies that don’t:

| Performance Metrics (average over the last 12 months) | MFT Users | MFT Non-Users | MFT Advantage |
| --- | --- | --- | --- |
| Errors / exceptions / problems, as a percentage of the total annual volume of transfers | 3.3% | 4.5% | 26% |
| Time to correct an identified error / exception / problem | 81 minutes | 387 minutes | 4.8-times |
| Annual cost of lost productivity for senders, receivers, and responders affected by errors / exceptions / problems | $3,750 | $23,975 | 6.4-times |

It’s very tempting to simply stop the analysis here – how much more compelling a business case in favor of MFT does there need to be?

But think about this: when we work with averages in this way, there is by definition a 50% likelihood that the actual values will be higher than those that we used in our calculations, and a 50% likelihood that they will be lower. Said another way, there’s virtually no chance that our calculations will end up being precisely right.

When you really think about it, our previous analysis tells us almost nothing about the reduction in file transfer risks from using an MFT solution – remember that risk is defined as the likelihood of the issues, as well as the magnitude of the resulting business impact. If we aren’t talking about probabilities and magnitudes, we aren’t talking about risks! It should make us consider how useful to the decision-maker our previous analysis really is.

The solution to this problem is to apply a proven, widely-used approach to risk modeling called Monte Carlo simulation. In a nutshell, we can carry out the computations for many (say, a thousand, or ten thousand) scenarios, each of which uses a random value from our range of informed estimates, as opposed to using single, static values. The results of these computations are likewise not a single, static number; the output is also a range and distribution, from which we can readily describe both probabilities and magnitudes – that is, risk – exactly what we are looking for!

Applying this approach to the assumptions used in Part I (feel free to go back and refresh your memory) results in the following:

| Inputs | Lower Bound | Upper Bound | Mean | Units | Distribution |
| --- | --- | --- | --- | --- | --- |
| Annual volume of file transfers | 1,000 | 1,000 | 1,000 | transfers | n/a |
| Errors, exceptions, or problems as a % of annual volume (MFT non-users) | 1.0% | 8.0% | 4.5% | issues / 1,000 transfers / year | normal |
| Errors, exceptions, or problems as a % of annual volume (MFT users) | 0.0% | 8.0% | 4.0% | issues / 1,000 transfers / year | triangular |
| Time to respond, remediate, and recover (MFT non-users) | 0.083 | 13.0 | 6.54 | hours | normal |
| Time to respond, remediate, and recover (MFT users) | 0.083 | 3.0 | 1.54 | hours | uniform |
| Number of working hours per employee per year | 2,080 | 2,080 | 2,080 | hours / employee / year | n/a |
| Cost of lost productivity for users: number of users affected by issues | 2 | 2 | 2 | employees | n/a |
| Cost of lost productivity for users: fully-loaded cost per user per year | $50,000 | $250,000 | $150,000 | $ / employee / year | triangular |
| Cost of lost productivity for users: % of user productivity lost during time to respond, remediate, recover | 10% | 60% | 35% | % of downtime | normal |
| Cost of responders: fully-loaded cost per responder per year | $50,000 | $150,000 | $100,000 | $ / employee / year | normal |
| Cost of responders: % of responder productivity lost during time to respond, remediate, recover | 100% | 100% | 100% | % of downtime | n/a |

Using a Monte Carlo model to carry out exactly the same calculations as before – only this time over 10,000 independent iterations – yields the following comparison of MFT users and MFT non-users:

[Chart: simulated distribution of annual costs for companies using MFT vs. MFT non-users]

It can be a little tricky at first to read this chart, so I have tried to summarize some of the information it provides in the following table:

| For every 1,000 annual file transfers, there is a(n)… | MFT Non-Users | MFT Users | MFT Advantage |
| --- | --- | --- | --- |
| 80% probability of the annual cost being greater than | $7,000 | $600 | 91% |
| 50% probability of the annual cost being greater than | $20,500 | $2,250 | 89% |
| 20% probability of the annual cost being greater than | $41,500 | $6,000 | 86% |

Note that at the 50% likelihood level, these values are similar to (but lower than) those from our previous, back-of-the-envelope approach. This is because the Monte Carlo model uses a more accurate, non-symmetrical distribution (i.e., a triangular distribution) for the fully-loaded cost of senders and receivers, reflecting the reality that the majority of enterprise end-users are at the lower end of the pay scale while still accommodating the fact that incidents will sometimes happen to the most highly-paid individuals. This is yet another reason why we should think more carefully about using simple means (averages) in our analysis!

Taken as-is, we can use this information to advise our business decision-makers using risk-based statements such as the following:

  • For every 1,000 file transfers, we estimate with 80% certainty that the annual business impact will fall between $2,000 and $56,000 for MFT non-users … and that it will fall between $500 and $8,500 for MFT users
  • For MFT non-users, we estimate an 80% likelihood that the annual business impact will be less than $41,500 … but for MFT users, there’s an 80% likelihood that it will be less than $6,000
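
For readers who want to experiment with this kind of analysis themselves, here is a minimal Monte Carlo sketch in Python that follows the structure of the inputs table above. The means, bounds and distribution shapes come from that table, but the standard deviations for the normal distributions (and the clipping of draws to the stated bounds) are my own assumptions, since they aren’t spelled out there, so the resulting percentiles will only approximate the published figures.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000              # independent Monte Carlo iterations
VOLUME = 1_000          # annual file transfers
HOURS_PER_YEAR = 2_080  # working hours per employee per year
USERS_AFFECTED = 2      # sender + receiver


def bounded_normal(mean, low, high):
    # Assumption: the table's bounds span roughly +/- 3 sigma; draws are clipped to the bounds.
    sigma = (high - low) / 6.0
    return np.clip(rng.normal(mean, sigma, N), low, high)


# Cost inputs shared by both groups (per the inputs table)
user_cost_per_hour = rng.triangular(50_000, 150_000, 250_000, N) / HOURS_PER_YEAR
user_pct_lost = bounded_normal(0.35, 0.10, 0.60)
responder_cost_per_hour = bounded_normal(100_000, 50_000, 150_000) / HOURS_PER_YEAR


def annual_cost(error_rate, hours_down):
    """Annual cost of lost productivity for each simulated scenario (vectorized over N)."""
    issues = VOLUME * error_rate
    cost_per_issue = hours_down * (
        USERS_AFFECTED * user_cost_per_hour * user_pct_lost  # sender and receiver lose part of their time
        + responder_cost_per_hour                            # responder loses 100% of theirs
    )
    return issues * cost_per_issue


non_users = annual_cost(bounded_normal(0.045, 0.01, 0.08),
                        bounded_normal(6.54, 0.083, 13.0))
users = annual_cost(rng.triangular(0.0, 0.04, 0.08, N),
                    rng.uniform(0.083, 3.0, N))

for p in (80, 50, 20):
    print(f"{p}% chance the annual cost exceeds "
          f"${np.percentile(non_users, 100 - p):,.0f} (non-users) vs. "
          f"${np.percentile(users, 100 - p):,.0f} (MFT users)")
```

Plotting the two result arrays as histograms or cumulative distributions reproduces the general shape of the chart above: two ranges of outcomes, with the MFT users’ range sitting far to the left.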

Remember that my comments from the previous blog still apply: this analysis incorporates some, but not all, of the associated costs – so the actual risk is understated. But if this wasn’t already a sufficient business case for an MFT solution, we could easily go ahead and estimate additional costs related to errors, exceptions, and problems with file transfers, such as loss of current / future revenue, loss or exposure of sensitive data, and repercussions of non-compliance. I haven’t attempted to model these costs here, but it seems clear enough that if we did, the gap between MFT users and MFT non-users would grow even wider.

Remember also that these calculations were done on a volume of 1,000 file transfers per year – you can easily scale them up to reflect your own environment. It’s pretty easy to see that it doesn’t take very much volume to justify the cost of implementing and supporting an MFT solution. (In fact, you might even save on operational costs from having a more uniform and efficient file transfer “platform”.)

The essential point is that we can use these proven, widely used tools to help make better-informed decisions about file transfers based on our organization’s appetite for risk. As security professionals, this means we will have done our job – and in a way that’s actually useful to the business decision-maker.

You also may be interested in the Aberdeen White Paper with this underlying research “From Chaos to Control: Creating a Mature File Transfer Process,” as well as these audio highlights from a recent webinar on this same topic of quantifying the benefits of Managed File Transfer.

A university network supports a broad population of students, faculty and others who all rely on a wireless network to do their work. Consider the user population: a big segment of it grew up with the Internet and has little patience for dead spots that don’t provide access to it.

A customer of ours works at a large university in Ohio, where there are no fewer than 2,700 access points on his wireless network. Before he started using WhatsUp Gold from Ipswitch, his team had to physically check wireless network equipment around campus whenever there was a problem. It was wearing his patience thin, along with the soles of his IT staff’s sneakers. It also meant long wait times to resolve issues, and way too many calls made and tickets opened by melodramatic students.

The challenge was to support a group of vocal users who, in some respects, were causing the problems they complained about. There’s an average of three mobile devices per student attached to the network, and Vimeo, torrents, and every other bandwidth hog you can imagine streams through the pipes. In other words, it was a BYOD free-for-all, and the IT staff had to keep wireless network connections going strong amid the chaos.

When our customer decided enough was enough, he looked for a product that provided the wireless network performance monitoring features he needed most, and it had to be affordable. He wanted the ability to accurately map his wireless network, see individual bandwidth usage, check signal strength, and get real-time alerts whenever a problem flared up. After giving WhatsUp Gold a trial run along with a few other vendors’ software products, he chose Ipswitch because it met his criteria and his price point. Since using the product, the phone rings a lot less and sneakers last a lot longer.

If your work involves managing wireless access on a network in higher education, or anywhere else for that matter, please register and join our webinar this Thursday, February 6. During the 30 minute webinar you’ll learn how to best manage the high traffic tides, quickly and easily identify bandwidth hogs and the offending applications, and receive notifications when access points approach capacity.

Hope to see you there. If you can’t make it, we’ll be sharing the replay afterwards.

Title: How to Overcome Challenges of Campus Wireless Network Performance
Date: February 6, 2014
Show Time: 2:00 pm EST
Duration: 30 Minutes
Register Here

In a webinar I participated in recently with Ipswitch File Transfer, I shared the following analysis and comparison of companies that use managed file transfer (MFT) solutions and companies that don’t:

| Performance Metrics (last 12 month avg.) | MFT Users | MFT Non-Users | MFT Advantage |
| --- | --- | --- | --- |
| Errors / exceptions / problems, as a percentage of the total annual volume of transfers | 3.3% | 4.5% | 26% |
| Time to correct an identified error / exception / problem | 81 minutes | 387 minutes | 4.8-times |

The comparison is easy enough to understand: MFT users experienced 26% fewer errors, exceptions, and problems as a percentage of the total annual volume of transfers, and they were 4.8-times faster to get going again when an error, exception, or problem did occur.

This is nice information to have for marketing purposes, but what does it really mean for the business?

A couple of quick, back-of-the-envelope calculations based on these findings shed some interesting light on this question:

  • Let’s base our analysis on an annual volume of 1,000 file transfers. This makes it easy for you to personalize for your own particular environment – for example, if your annual volume is 10,000 transfers, you can simply multiply these results by 10.
  • Let’s assume that the average percentage of errors, exceptions, and problems is as shown above
  • Likewise, let’s assume that the average time to correct errors, exceptions, and problems is as shown above
  • A simple computation leads us to the following:
    • 1,000 transfers * 3.3% * 81 minutes = 2,711 minutes lost per year for MFT users
    • 1,000 transfers * 4.5% * 387 minutes = 17,331 minutes lost per year for MFT non-users

Now, let’s think about the cost of that lost time. In a person-to-person scenario, there are at least two people affected – and arguably three:

  • The sender of the file loses at least some of their productivity
  • The receiver of the file loses at least some of their productivity
  • In addition, the issue may require the involvement of an additional person to help respond, remediate, and recover – and this responder loses all of their productivity

For the sake of this back-of-the-envelope calculation, let’s further assume:

  • The fully-loaded cost per person is $50 per hour
  • Both sender and receiver lose one-third of their respective productivity for the time the issue remains uncorrected (e.g., they can still do other work)
  • The responder, however, loses 100% of their productivity for the time the issue remains uncorrected
  • A simple calculation leads us to the following:
    • 2,711 minutes * 1 hour / 60 minutes * $50 / hour * (1/3 + 1/3 + 1) = $3,750 lost per year for MFT users
    • 17,331 minutes * 1 hour / 60 minutes * $50 / hour * (1/3 + 1/3 + 1) = $23,975 lost per year for MFT non-users

This is a 6.4-times advantage for MFT users, for the cost of lost productivity alone!
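
If you’d like to plug in your own numbers, here is a quick sketch of the same arithmetic in Python. The constants mirror the assumptions in the bullets above; the results differ from the dollar figures quoted in this post by a few percent only because of rounding in the intermediate minute counts.

```python
# Assumptions from the bullets above: $50/hour fully-loaded cost, sender and
# receiver each lose one-third of their productivity, the responder loses all of it.
HOURLY_COST = 50
PRODUCTIVITY_FACTOR = 1/3 + 1/3 + 1  # sender + receiver + responder


def annual_lost_productivity(transfers, error_rate, minutes_to_correct):
    minutes_lost = transfers * error_rate * minutes_to_correct
    return minutes_lost / 60 * HOURLY_COST * PRODUCTIVITY_FACTOR


mft_users = annual_lost_productivity(1_000, 0.033, 81)       # roughly $3,700 per year
mft_non_users = annual_lost_productivity(1_000, 0.045, 387)  # roughly $24,200 per year
print(f"MFT users: ${mft_users:,.0f}  non-users: ${mft_non_users:,.0f}  "
      f"advantage: {mft_non_users / mft_users:.1f}x")
```

Swapping in your own transfer volume, error rate, and hourly cost is all it takes to personalize the estimate for your environment.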

If this wasn’t already a sufficient business case for an MFT solution, we could also estimate additional costs related to errors, exceptions, and problems with file transfers, such as:

  • Opportunity costs
    • Loss of current revenue
    • Loss of future revenue
    • Inability to carry out the organization’s mission
  • Costs associated with the loss or exposure of sensitive data
  • Costs associated with non-compliance

I won’t attempt to quantify these costs here, but it seems clear enough that if we did then the gap between MFT users and MFT non-users would grow even wider – e.g., Aberdeen’s research confirmed that compared to MFT non-users, MFT users had fewer security incidents (e.g., data loss or exposure), fewer non-compliance incidents (e.g., audit deficiencies), fewer errors and exceptions, and fewer calls and complaints. As if we needed any more convincing.

Remember, these calculations were done on a volume of 1,000 file transfers per year – you can easily scale these up to reflect your own environment. It’s pretty easy to see that it doesn’t take very much volume to justify the cost of implementing and supporting an MFT solution. (In fact you might even save in operational costs, from the benefits of having a more uniform and efficient file transfer “platform”.)

Another thing we might want to do with Aberdeen’s research findings is to show how MFT users have actually reduced their risk compared to that of MFT non-users – using the proper definition of risk, which has to do with the probability of an error, exception, or problem and the magnitude of the corresponding business impact. The results of that more sophisticated analysis would not be a single, static number (such as the ones we derived above), but a more realistic range of values that would support making business decisions about file transfer based on the organization’s appetite for risk.

In my next post I will dig deeper into the business case for MFT by using a proven, widely-used approach to risk modeling called Monte Carlo simulation.

You also may be interested in the Aberdeen White Paper with this underlying research “From Chaos to Control: Creating a Mature File Transfer Process,” as well as these audio highlights from a recent webinar on this same topic of quantifying the benefits of Managed File Transfer.