I’m excited to announce the release of Ipswitch Analytics, a new reporting and monitoring solution for MOVEit™ Secure Managed File Transfer. Ipswitch Analytics ensures reliable, cost-effective and auditable file transfers. IT teams gain deep insight into business-critical file transfers through an innovative analytics engine that includes an interactive activity monitor, automated report creation and distribution, and fine-grained access control.

Ipswitch Analytics accesses and consolidates data from all MOVEit File Transfer (DMZ) and MOVEit Central servers. Authorized users are able to monitor MOVEit activity in popular web browsers from any device. Ipswitch Analytics also simplifies the audit process by managing workflow, transfer, security and audit activity in one centralized location.


With Ipswitch Analytics, businesses can:

  • Ensure reliable file transfers: Track MOVEit performance indicators such as successful transfers by end-point, peak load performance, and total throughput trends.
  • Automate reporting and distribution for Service Level Agreements (SLAs) and policy compliance: Customize reports, establish distribution lists with fine-grained access control and schedule auto-generated reports for delivery. Email alerts are sent to authorized users as reports are generated.
  • Manage workflow, transfer, security and audit activity in one place: Maintain a single view of activity across all MOVEit File Transfer (DMZ) and MOVEit Central servers. Manage key parameters for all file transfer processes – such as transfer status, user access, encryption, and file formatting – to make data-driven decisions.
  • Simplify the audit process: Use out-of-the-box report templates, or easily create custom reports. Drag-and-drop elements to organize presentation of key metrics to validate compliance with SLA, regulatory and corporate policies.

Feature details:

  • Report Templates – Over 50 out-of-the-box templates to manage workflows, transfers, security and audits. Ease creation of reports by starting from pre-defined templates.
  • Policy Management – Create policies to manage user access. Restrict users’ ability to view data associated with defined organizations, servers, or users.
  • Browser-based UI – The Ipswitch Analytics browser-based UI can be accessed by any authorized user from any device or desktop via popular web browsers. Its HTML5 interface updates data dynamically and offers a drag-and-drop user experience.

For more information about Ipswitch Analytics, please visit: http://bit.ly/Ipswitch_Analytics.


The popularity of consumer file-sync-and-share solutions such as Dropbox continues to grow, as consumers appreciate how easily they can transfer large files, such as photos and videos, to family and friends. While beneficial to consumers, these applications are problematic for IT departments. More and more employees use Dropbox to share corporate files without fully understanding the risk. Organizations must do a better job of warning employees that using online file sharing tools to share sensitive files at work can result in serious penalties, and even termination. Let’s take a look at why:

1. Operating in the shadows.

Companies’ IT departments aren’t able to track when an employee accesses Dropbox to share files and are unable to control which employee devices are able to sync with a corporate computer. This practice, often called “shadow IT,” effectively locks the IT department out of the file-sharing activities of employees. As a result, IT departments are unable to track how files have been modified, determine who has viewed files if sensitive information is leaked, or remotely wipe Dropbox if an employee’s device is stolen.

2. Potential for data theft.

Dropbox has limited security features, and because companies aren’t able to monitor what files are synced to what device, it’s impossible to know whether data has been shared with or accessed by the wrong party, which increases risk of insider threats and data theft.

3. Data loss.

Dropbox has reportedly lost customer files – or failed to back them up at all – meaning that employees run the risk of permanently losing company files, with no way for the IT department to recover them.

4. Adherence to compliance regulations.

Many industries have compliance regulations which dictate that certain files have limited access or remain encrypted during transfer. Because Dropbox is not equipped with secure file regulation capabilities, there is an increased risk that employees are unknowingly violating their company’s compliance requirements.

5. Limited data security.

All employees know that it’s important to protect sensitive files such as financial data or intellectual property documents. Yet Dropbox has limited encryption and security features, which leaves data exposed and at risk of being corrupted or landing in the wrong hands.

While Dropbox and other online file sharing tools are sufficient for sending personal files, these systems simply aren’t capable of securely managing corporate file transfers. There’s certainly a demand among employees for reliable, user-friendly file transfer options, and IT departments should look to meet this need by providing employees with a highly secure alternative, such as Managed File Transfer (MFT) solutions.

As an IT professional, this likely sounds all too familiar: find ways to keep business processes smooth and secure despite lacking full control or visibility over the movement of files. As the types of data, threats, transfer scenarios and modes all continue to multiply, you are expected to keep it all together – all while managing countless other tasks.

But it’s time to disrupt the status quo and resolve some of the pains of file transfer. Ask yourself if you are currently experiencing any of the following:

  • Inadequate security
  • Lack of control
  • Increasing complexity and time consumption when hunting down reports or missing files
  • Invisibility (not the super hero kind, but the kind when you don’t have full view into the transport of important information)

If you answered yes to any of the above, look no further – managed file transfer might be what you are looking for (and might even make you feel like a super hero*).

*Invisibility not guaranteed

Check out our new Managed File Transfer infographic here and tell us your thoughts.

Grab the PDF of the MFT Infographic here.

In The Business Case for Managed File Transfer – Part I, a back-of-the-envelope calculation based on the findings from Aberdeen’s research showed the following advantage for companies that use managed file transfer (MFT) solutions, compared to companies that don’t:

| Performance Metric (average over the last 12 months) | MFT Users | MFT Non-Users | MFT Advantage |
| --- | --- | --- | --- |
| Errors / exceptions / problems, as a percentage of the total annual volume of transfers | 3.3% | 4.5% | 26% |
| Time to correct an identified error / exception / problem | 81 minutes | 387 minutes | 4.8x |
| Annual cost of lost productivity for senders, receivers, and responders affected by errors / exceptions / problems | $3,750 | $23,975 | 6.4x |
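The “MFT Advantage” column follows directly from the other two columns; a quick arithmetic check (note the error-rate figure works out to roughly 27% before rounding, reported as 26% in the source data):

```python
# Reproduce the "MFT Advantage" figures from the table above
error_reduction = 1 - 3.3 / 4.5   # ~0.27: reported as 26% (rounding differs slightly)
time_ratio = 387 / 81             # ~4.8x faster to correct an error
cost_ratio = 23_975 / 3_750       # ~6.4x lower annual productivity cost
print(f"{error_reduction:.0%}, {time_ratio:.1f}x faster, {cost_ratio:.1f}x cheaper")
```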

It’s very tempting to simply stop the analysis here – how much more compelling a business case in favor of MFT does there need to be?

But think about this: when we work with averages in this way, there is by definition a 50% likelihood that the actual values will be higher than those that we used in our calculations, and a 50% likelihood that they will be lower. Said another way, there’s virtually no chance that our calculations will end up being precisely right.

When you really think about it, our previous analysis tells us almost nothing about the reduction in file transfer risk from using an MFT solution – remember that risk is defined by both the likelihood of issues and the magnitude of the resulting business impact. If we aren’t talking about probabilities and magnitudes, we aren’t talking about risk! That should make us question how useful our previous analysis really is to the decision-maker.

The solution to this problem is to apply a proven, widely-used approach to risk modeling called Monte Carlo simulation. In a nutshell, we can carry out the computations for many (say, a thousand, or ten thousand) scenarios, each of which uses a random value from our range of informed estimates, as opposed to using single, static values. The results of these computations are likewise not a single, static number; the output is also a range and distribution, from which we can readily describe both probabilities and magnitudes – that is, risk – exactly what we are looking for!

Applying this approach to the assumptions used in Part I – feel free to go back and refresh your memory – results in the following:

| Inputs | Lower Bound | Upper Bound | Mean | Units | Distribution |
| --- | --- | --- | --- | --- | --- |
| Annual volume of file transfers | 1,000 | 1,000 | 1,000 | transfers | n/a |
| Errors, exceptions, or problems as a % of annual volume – MFT non-users | 1.0% | 8.0% | 4.5% | issues / 1,000 transfers / year | normal |
| Errors, exceptions, or problems as a % of annual volume – MFT users | 0.0% | 8.0% | 4.0% | issues / 1,000 transfers / year | triangular |
| Time to respond, remediate, and recover – MFT non-users | 0.083 | 13.0 | 6.54 | hours | normal |
| Time to respond, remediate, and recover – MFT users | 0.083 | 3.0 | 1.54 | hours | uniform |
| Number of working hours per employee per year | 2,080 | 2,080 | 2,080 | hours / employee / year | n/a |
| Number of users affected by issues | 2 | 2 | 2 | employees | n/a |
| Fully-loaded cost per user per year | $50,000 | $250,000 | $150,000 | $ / employee / year | triangular |
| % of user productivity lost during time to respond, remediate, recover | 10% | 60% | 35% | % of downtime | normal |
| Fully-loaded cost per responder per year | $50,000 | $150,000 | $100,000 | $ / employee / year | normal |
| % of responder productivity lost during time to respond, remediate, recover | 100% | 100% | 100% | % of downtime | n/a |

Using a Monte Carlo model to carry out exactly the same calculations as before – only this time over 10,000 independent iterations – yields the following comparison of MFT users and MFT non-users:
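As a rough illustration – my own reconstruction, not Aberdeen’s actual model – the simulation described above can be sketched in a few lines of Python. The mapping from the stated bounds and means to distribution parameters, and the cost formula itself (two affected users plus one fully-occupied responder per issue), are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10_000              # independent Monte Carlo iterations
VOLUME = 1_000          # annual file transfers
HOURS_PER_YEAR = 2_080  # working hours per employee per year

def tri(lo, hi, mean, size):
    # Triangular distribution, with the mode chosen so the mean works out:
    # mean = (lo + mode + hi) / 3  =>  mode = 3*mean - lo - hi
    return rng.triangular(lo, 3 * mean - lo - hi, hi, size)

def norm(lo, hi, mean, size):
    # Normal distribution; treat the stated bounds as roughly +/- 3 sigma
    # and clip stray samples back into range (an assumption of this sketch)
    return np.clip(rng.normal(mean, (hi - lo) / 6, size), lo, hi)

def annual_cost(issue_rate, hours_per_issue):
    issues = VOLUME * issue_rate                                     # issues per year
    user_rate = tri(50_000, 250_000, 150_000, N) / HOURS_PER_YEAR    # $/hour per affected user
    pct_lost = norm(0.10, 0.60, 0.35, N)                             # share of user time lost
    resp_rate = norm(50_000, 150_000, 100_000, N) / HOURS_PER_YEAR   # $/hour, 100% lost
    return issues * hours_per_issue * (2 * user_rate * pct_lost + resp_rate)

non_users = annual_cost(norm(0.01, 0.08, 0.045, N), norm(0.083, 13.0, 6.54, N))
users     = annual_cost(tri(0.00, 0.08, 0.040, N), rng.uniform(0.083, 3.0, N))

for p in (20, 50, 80):
    print(f"{100 - p}% chance the annual cost exceeds: "
          f"non-users ${np.percentile(non_users, p):,.0f} / "
          f"users ${np.percentile(users, p):,.0f}")
```

The output of each run is a distribution of 10,000 annual-cost outcomes rather than a single number, from which probability statements like those in the table below can be read off directly.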

[Chart: annual cost distributions for companies using MFT vs. companies not using MFT]

It can be a little tricky at first to read this chart, so I have tried to summarize some of the information it provides in the following table:

| For every 1,000 annual file transfers, there is a(n) | MFT Non-Users | MFT Users | MFT Advantage |
| --- | --- | --- | --- |
| 80% probability of the annual cost being greater than | $7,000 | $600 | 91% |
| 50% probability of the annual cost being greater than | $20,500 | $2,250 | 89% |
| 20% probability of the annual cost being greater than | $41,500 | $6,000 | 86% |

Note that at the 50% likelihood level, these values are similar to (but lower than) those from our previous, back-of-the-envelope approach. This is because the Monte Carlo model uses a more accurate, non-symmetrical distribution (a triangular distribution) for the fully-loaded cost of senders and receivers. This reflects the reality that the majority of enterprise end users are at the lower end of the pay scale, while still accommodating the fact that incidents will sometimes happen to the most highly paid individuals. This is yet another reason to think carefully before using simple means (averages) in our analysis!

Taken as-is, we can use this information to advise our business decision-makers with risk-based statements such as the following:

  • For every 1,000 file transfers, we estimate with 80% certainty that the annual business impact will fall between $2,000 and $56,000 for MFT non-users … and that it will fall between $500 and $8,500 for MFT users
  • For MFT non-users, we estimate an 80% likelihood that the annual business impact will be less than $41,500 … but for MFT users, there’s an 80% likelihood that it will be less than $6,000

Remember that my comments from the previous blog still apply: this analysis incorporates some, but not all, of the associated costs – so the actual risk is understated. But if this weren’t already a sufficient business case for an MFT solution, we could easily estimate additional costs related to errors, exceptions, and problems with file transfers, such as loss of current or future revenue, loss or exposure of sensitive data, and the repercussions of non-compliance. I haven’t attempted to model these costs here, but it seems clear that doing so would widen the gap between MFT users and MFT non-users even further.

Remember also that these calculations were done on a volume of 1,000 file transfers per year – you can easily scale them up to reflect your own environment. It’s pretty easy to see that it doesn’t take much volume to justify the cost of implementing and supporting an MFT solution. (In fact, you might even save on operational costs, from the benefits of having a more uniform and efficient file transfer “platform”.)

The essential point is that we can use these proven, widely used tools to help make better-informed decisions about file transfers, based on our organization’s appetite for risk. As security professionals, this means that we will have done our job – and in a way that’s actually useful to the business decision-maker.

You may also be interested in the Aberdeen white paper containing the underlying research, “From Chaos to Control: Creating a Mature File Transfer Process,” as well as these audio highlights from a recent webinar on the same topic of quantifying the benefits of Managed File Transfer.

Just what is managed file transfer (MFT)? It’s easy to think of MFT as little more than file transfer on steroids, or a super slick FTP server. But MFT is more than that because the problems IT administrators solve with MFT demand more. Our customers don’t move files for fun – they move files to get work done.

MFT is a category of middleware that ensures reliable, secure and auditable file transfer to enable critical business processes. But even though file transfer is at the core of MFT, it’s the M in MFT that sets the category apart.

Back in the Day…

There was a time when an organization in need of file transfer infrastructure would reach for a basic FTP server by default. That was the answer if you needed to make files available to partners, create a space where partners could drop files into a process, and script all around those activities to keep things moving while maintaining some sense of security. But as file volumes went up, and the range of processes that involve file exchange broadened, so too did the number and variety of software solutions that could help to accomplish the goal.

In recent history, we have seen the emergence of a new category, so-called Enterprise File Synchronization and Sharing (EFSS). This category of mostly personal tools helps individuals share files between their myriad devices, including smartphones, tablets, and home and work computers. While easy for end users, these mostly cloud-based services have become a real problem for IT departments. That’s because the simplicity, openness, and device-friendliness they allow come at a real cost to the control, visibility, and security protections that are the IT department’s responsibility.

New Demands for Security and Compliance

In addition to pleasing end users, IT also has to please the businesses they serve, and on that side of the ledger things have grown more complex too. Today, the variety of business processes that depend on reliable file transfer is up and the volume of transfer activity is up. The need to manage all of this activity under a tighter security and compliance regimen means nothing can be left to chance.

Where simple FTP was once sufficient, today IT has to reach for more capable infrastructure that mixes the end-user simplicity of EFSS with the reliability of FTP and the business-process focus of integration middleware. But they need to do this in a way that doesn’t inadvertently turn what has traditionally been a solvable problem into a messy, bespoke development effort. The last thing they want is to engage “solutions vendors” with their bag of forty tools, complemented by expensive internal developers and systems integrators.

This is where MFT fits in.

MFT is a purpose-specific class of middleware focused on the reliable transfer of files between business parties, using simple, secure protocols and easy-to-understand models of exchange. But it’s fortified with security, manageability, scalability, file processing and integration, and business-reporting options that allow IT to deliver more sophisticated, controlled file-transfer solutions without slipping into the custom-code abyss.

[Diagram: Future posts will look at each of the components of Managed File Transfer (MFT)]

In a series of upcoming posts, my colleagues and I will explore each facet of MFT, including:

  • Tools for end-user access: The ways users can participate in MFT-driven business processes using the skills they already possess, and tools that leverage familiar activities like sending email attachments or working in local folders.
  • File-transfer automation and workflow: The ways that file transfer can be put to work, either through the handling and preparation of files for further processing, or through the standards-based handoff of files, metadata, or both to the next step in a business process.
  • Reporting and analytics: The importance of visibility into the volume, history, and current activity of a 24/7 MFT flow into and out of your business, and of end-to-end visibility in linking that traffic to your business.
  • MFT administration: A range of topics, from security and compliance to topologies that deliver high availability, performance under load, and efficiency of operations.

So stay tuned…

As a product manager of an integrated solution suite, it’s interesting to compare traditional systems management (OS deployment, inventory, software delivery, patching, monitoring) and its major trends (security, virtualization, cloud, efficient data centers) with network management (deployment and configuration, backup/restore, monitoring, traffic analysis, Quality of Service) and networking trends (mobile devices, cloud, virtualization, larger networking demands). There are many similarities between these two IT focus areas, and I will blog about several of them over the next year. One similarity that is particularly easy to spot and leaps off the page for me relates to discovery. In fact, it ALL starts with discovery.

By obtaining a complete and accurate discovery of your networking “stuff,” you will gain immediate benefits. The first premise here is that until you know what you have (i.e. your stuff), where it is, and how it is connected, you cannot determine the best course of action to improve services, plan for new capacity, manage uptime and planned outages, or do much of anything else. Performing a regularly scheduled discovery of your devices provides benefits that trickle into every other aspect of network management, and IT services in general.

The second premise is that the discovery process should be automated. Let’s face it: we live in a day and age where automation can and should be your best friend. It allows an IT administrator to remove the mundane, boring daily tasks from the to-do list and focus on things that add value. Back in the late ’90s, while working in IT at a local private liberal arts college, we performed what I call a “clipboard” inventory twice a year. Our manual inventory was inaccurate the moment we left the professor’s office, and we could only gather the most basic details: CPU, RAM, network card, Add/Remove Programs. The level of detail that can be gathered today in an automated fashion is very complete and can be adapted to capture almost any piece of electronically stored information on a device. Don’t waste any more time doing manual discovery and inventories!
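To make the automation premise concrete, here is a minimal, hypothetical sketch of a scheduled discovery sweep in Python. The subnet and ports are made-up examples, and real discovery products use far richer mechanisms (ICMP, SNMP, ARP, layer-2 discovery); this only illustrates the “no clipboard required” idea:

```python
import socket
from ipaddress import ip_network

def discover(subnet, ports=(22, 80, 443), timeout=0.2):
    """Return {host: [open ports]} for hosts answering a TCP connect."""
    found = {}
    for host in ip_network(subnet).hosts():
        for port in ports:
            try:
                # A successful connect means something is listening there
                with socket.create_connection((str(host), port), timeout=timeout):
                    found.setdefault(str(host), []).append(port)
            except OSError:
                pass  # host down, port closed, or filtered
    return found

# Hypothetical usage, e.g. from a nightly cron job:
# inventory = discover("192.168.1.0/24")
# for host, open_ports in inventory.items():
#     print(host, "responds on", open_ports)
```

Run on a schedule and diffed against the previous run, even a crude sweep like this beats a twice-yearly clipboard walk: it never goes stale the moment you leave the office.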

The third premise is that you need a management system that provides out-of-the-box reporting and mapping capabilities that easily and intuitively show discovered devices, their attributes, and their connectivity. The system should also be flexible enough to generate your own custom reports as needed. As a really cool bonus, the reports and maps should update dynamically as new discoveries are performed, so that you not only know what your network looks like right now but can also easily visualize how it is performing.

Imagine going from a world of clipboard inventory, 2 times a year, to a fully automated discovery complete with a dynamically updated map of your network. Does it get any better than that? Possibly not, but then again the only constant with technology is change.

As we begin our discussion on how to provide great IT services, I hope you will start to think about, and hopefully act upon, the premise that “it ALL starts with discovery”.

P.S. As a public service announcement, here is a product link that can dramatically assist with discovery and mapping, and meets every requirement described above: visit WhatsUpGold Network Discovery for more details.

It’s great to have a line that’s far above the rest. It’s great to see that in the Magic Quadrant, in a Wave, in any industry report. But what does it all mean? As a technology provider, I understand that corporate executives like dashboards, spreadsheets, charts and graphs – these are the tools many of them use to run their businesses day-to-day. But what does it mean to see a spike in the line, or a drop in the line? The key to any reporting capability is solid analysis and analytics. For instance, a marketing executive needs to know why there are dramatic spikes in news-reference volume for some vendors and not others. That same executive would also want to consider why search trends don’t follow news volume.

Read more: “Looking Deeper Into The Data: Analysis and Analytics”