You might say that the entire point of a Managed File Transfer (MFT) system is to do exactly that: provide centralized management and control. For example, let’s say that your company is subject to the Payment Card Industry Data Security Standard (PCI DSS). Requirement 4 of PCI DSS is to “encrypt transmission of cardholder data and sensitive information across public networks,” such as the Internet. Let’s also say that you frequently need to transmit cardholder data to partner companies, such as vendors who will be fulfilling requests.

One option is to simply allow someone within your company to email that information, or to have an automated process do so. You’ll need to ensure that everyone remembers to encrypt those emails — you did remember to get digital certificates for everyone, correct? — every single time. If someone forgets, you’ve created the potential for a data breach, and it’s not going to look very good for your company on the evening news.

Another option is to automate the file transfer using an MFT solution. That solution can be centrally configured to always apply PGP‐based encryption to the file, to always require an FTP‐over‐SSL connection with the vendors’ FTP servers, and to always require 256‐bit AES encryption. You don’t have to remember those details beyond the initial configuration — it’s centrally configured. Even if your users need to manually transfer something ad‐hoc — perhaps an additional emergency order during the Christmas rush — your MFT solution will “know the rules” and act accordingly. Your users’ lives become easier, your data stays protected, and everyone sleeps more soundly at night. This central control is often referred to as policy-based configuration because it’s typically configured in one spot and enforced — not just applied — to your entire MFT infrastructure, regardless of how many physical servers and clients you are running.
What’s the difference between enforced and applied? Making a configuration change is applying it. That doesn’t, of course, stop someone else from coming along behind you and applying a new configuration. The idea with policies is that they’re configured separately and protected by a unique set of permissions that govern who can modify them—they’re not just wide open to the day‐to‐day administrators who maintain your servers. In many cases, a review/approve workflow must be followed to change a policy. Once set, the policies are continually applied to manageable elements such as MFT client software and MFT servers. A server administrator can’t simply reconfigure a server, because the policy prevents it. The MFT solution ensures that your entire MFT infrastructure stays properly configured all the time.
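To make concrete what such a policy automates, here is a minimal sketch of the two steps named above, OpenPGP encryption followed by an FTP-over-SSL (FTPS) upload, written in Python. The host, credentials, file name, and recipient key are all hypothetical, and calling the gpg command line is just one way to do the PGP step; in a real MFT deployment the platform performs these steps itself, driven by the central policy.

```python
import ftplib
import subprocess

# OpenPGP-encrypt the file with the partner's public key (hypothetical
# recipient); by default this produces orders.csv.gpg alongside the original.
subprocess.run(
    ["gpg", "--encrypt", "--recipient", "partner@example.com", "orders.csv"],
    check=True,
)

# FTP over SSL/TLS (FTPS): the login is encrypted, and prot_p() upgrades
# the data channel so the file contents travel encrypted as well.
ftps = ftplib.FTP_TLS("ftp.example-partner.com")  # hypothetical host
ftps.login("acme_user", "s3cret")                 # hypothetical account
ftps.prot_p()

with open("orders.csv.gpg", "rb") as f:
    ftps.storbinary("STOR orders.csv.gpg", f)

ftps.quit()
```

The point of policy-based configuration is that steps like prot_p() and the PGP pass are never left to memory or to individual scripts; the platform applies them on every transfer.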

– From The Tips and Tricks Guide to Managed File Transfer by Don Jones

To read more, check out the full eBook or stay tuned for more file transfer tips and tricks!

Possibly not. The Internet’s venerable File Transfer Protocol (FTP) is usually supported by Managed File Transfer (MFT) systems, which can typically use FTP as one of the ways in which data is physically moved from place to place. However, MFT essentially wraps a significant management and automation layer around FTP. Consider some of the things an MFT solution might provide above and beyond FTP itself—even if FTP were, in fact, being used for the actual transfer of data:

  • Most MFT solutions will offer a secure, encrypted variant of FTP as well as numerous other more‐secure file transfer options. Remember that FTP by itself doesn’t offer any form of transport-level encryption (although you could obviously encrypt the file data itself before sending, and decrypt it upon receipt; doing so involves logistical complications like sharing passwords or certificates).
  • MFT solutions often provide guaranteed delivery, meaning they use file transfer protocols that give the sender a confirmation that the file was, in fact, correctly received by the recipient. This can be important in a number of business situations.
  • MFT solutions can provide automation for transfers, automatically transferring files that are placed into a given folder, transferring files at a certain time of day, and so forth (a minimal hot-folder sketch appears after this list).
  • MFT servers can also provide set‐up and clean‐up automation. For example, successfully‐transferred files might be securely wiped from the MFT server’s storage to help prevent unauthorized disclosure or additional transfers.
  • MFT servers may provide application programming interfaces (APIs) that make file transfer easier to integrate into your internal line‐of‐business applications.
  • MFT solutions commonly provide detailed audit logs of transfer activity, which can be useful for troubleshooting, security, compliance, and many other business purposes.
  • Enterprise‐class MFT solutions may provide options for automated failover and high availability, helping to ensure that your critical file transfers take place even in the event of certain kinds of software or hardware failures.
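As a rough illustration of the automation and guaranteed-delivery bullets above, here is a minimal hot-folder sketch: anything dropped into an outgoing folder is uploaded over FTPS, and a SIZE check serves as a crude delivery confirmation. The folder names, host, and credentials are hypothetical; a real MFT server replaces this loop with configurable schedules, retries, and proper receipts.

```python
import ftplib
import time
from pathlib import Path

OUTGOING = Path("outgoing")   # hypothetical hot folder watched for new files
SENT = Path("sent")           # local archive so files aren't sent twice
for d in (OUTGOING, SENT):
    d.mkdir(exist_ok=True)

def upload(path: Path) -> None:
    ftps = ftplib.FTP_TLS("ftp.example-partner.com")  # hypothetical host
    ftps.login("acme_user", "s3cret")                 # hypothetical account
    ftps.prot_p()  # encrypt the data channel, not just the login
    with path.open("rb") as f:
        ftps.storbinary(f"STOR {path.name}", f)
    # Crude delivery check: the remote size must match the local size.
    if ftps.size(path.name) != path.stat().st_size:
        raise RuntimeError(f"size mismatch after uploading {path.name}")
    ftps.quit()

while True:
    for path in OUTGOING.iterdir():
        if path.is_file():
            upload(path)
            path.rename(SENT / path.name)  # archive the sent file
    time.sleep(30)  # poll the folder every 30 seconds
```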

In short, FTP isn’t a bad file transfer protocol—although it doesn’t offer encryption. MFT isn’t a file transfer protocol at all; it’s a set of management services that wrap around file transfer protocols—like FTP, although that’s not the only choice—to provide better security, manageability, accountability, and automation.

In today’s business, FTP is rarely “enough.” Aside from its general lack of security—which can be partially addressed by using protocols such as SFTP or FTPS instead—FTP simply lacks manageability, integration, and accountability. Many businesses feel that they simply need to “get a file from one place to another,” but in reality they also need to:

  • Make sure the file isn’t disclosed to anyone else
  • Ensure, in a provable way, that the file got to its destination (see the hash sketch after this list)
  • Get the file from, or deliver a file to, other business systems (integration)
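On the provable-delivery point, the usual building block is a cryptographic hash: the sender publishes a digest alongside the file, and the receiver recomputes it and compares. A minimal sketch, with the file name hypothetical:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Digest a file in chunks so large transfers don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Sender publishes this value alongside the file; the receiver recomputes
# it after download and compares the two.
print(sha256_of("orders.csv"))  # hypothetical file
```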

In some cases, the business might even need to translate or transform a file before sending it or after receiving it. For example, a file received in XML format may need to be translated to several CSV files before being fed to other business systems or databases—and an MFT solution can provide the functionality needed to make that happen.
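To give a feel for what such a transformation step looks like, here is a minimal sketch that flattens a hypothetical XML order feed into CSV using Python’s standard library. The element and attribute names are invented for illustration; an MFT product would run an equivalent mapping as a configured step in the transfer workflow.

```python
import csv
import xml.etree.ElementTree as ET

# Hypothetical input shape:
# <orders><order id="1"><sku>A-100</sku><qty>3</qty></order>...</orders>
tree = ET.parse("orders.xml")

with open("orders.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["id", "sku", "qty"])  # header row for downstream systems
    for order in tree.getroot().iter("order"):
        writer.writerow([
            order.get("id"),
            order.findtext("sku"),
            order.findtext("qty"),
        ])
```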

Many organizations tend to look at MFT first for its security capabilities, which often revolve around a few basic themes:

  • Protecting data in‐transit (encryption)
  • Ensuring that only authorized individuals can access the MFT system (authorization and authentication)
  • Tracking transfer activity (auditing)
  • Reducing the spread of data (securely wiping temporary files after transfers are complete, and controlling the number of times a file can be transferred; a naive wipe sketch appears after this list)
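The wipe sketch promised above is shown next. It naively overwrites a file with random bytes before unlinking it; note that on SSDs, journaling filesystems, and copy-on-write storage, overwriting in place does not reliably destroy data, which is one reason this job belongs in a purpose-built product rather than a script.

```python
import os

def wipe(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, then delete it (naive sketch)."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(remaining, 1 << 20)
                f.write(os.urandom(chunk))
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())  # push the overwrite through to disk
    os.remove(path)

wipe("transfer_tmp_0001.dat")  # hypothetical temporary file
```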

These are all things that a simple FTP server can’t provide. Having satisfied their security requirements, organizations then begin to take advantage of the manageability capabilities of MFT systems, including centralized control, tracking, automation, and so forth—again, features that an FTP server alone simply can’t give you.

– From The Tips and Tricks Guide to Managed File Transfer by Don Jones

To read more, check out the full eBook or stay tuned for more file transfer tips and tricks!

Definitely not. To begin with, there are numerous kinds of encryption—some of which can actually be broken quite easily. One of the earlier common forms of encryption (around 1996) relied on encryption keys that were 40 bits in length; surprisingly, many technologies and products continue to use this older, weaker form of encryption. Although there are roughly 1.1 trillion possible keys at that length, relatively little computing power is needed to break the encryption—a modern home computer can do so in just a few days, and a powerful supercomputer can do so in a few minutes.
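The arithmetic behind that claim is easy to check: a 40-bit key gives 2^40, roughly 1.1 trillion, possibilities. The attack rate below is an illustrative assumption, not a measured figure.

```python
keys = 2 ** 40       # 1,099,511,627,776 possible 40-bit keys
rate = 10_000_000    # assumed keys/second for a home PC (illustrative only)
seconds = keys / rate
print(f"{keys:,} keys -> {seconds / 86_400:.1f} days at {rate:,} keys/s")
# ~1.3 days; a machine 1,000x faster finishes in roughly two minutes.
```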

So all encryption is definitely not the same. That said, the field of cryptography has become incredibly complex and technical in the past few years, and it has become very difficult for business people and even information technology professionals to fully understand the various differences. There are different encryption algorithms—DES, AES, and so forth—as well as encryption keys of differing lengths. Rather than try to become a cryptographic expert, your business would do well to look at higher‐level performance standards.

One such standard comes from the US Federal Information Processing Standards (FIPS). FIPS specifications are managed by the National Institute of Standards and Technology (NIST); FIPS 140‐2 is the standard that specifically applies to data encryption, and it is maintained by NIST’s Computer Security Division. In fact, FIPS 140‐2 is accepted by both the US and Canadian governments, and is used by almost all US government agencies, including the National Security Agency (NSA), as well as by many foreign agencies. Although not mandated for private commercial use, the general feeling in the industry is that “if it’s good enough for the paranoid folks at the NSA, it’s good enough for us too.”

FIPS 140‐2 specifies the encryption algorithms and key strengths that a cryptography package must support in order to become certified. The standard also specifies testing criteria, and FIPS 140‐2 certified products are those that have passed the specified tests. Vendors of cryptography products can submit their products to the Cryptographic Module Validation Program (CMVP), which validates that a product meets the FIPS specification. The validation work is carried out by independent, NIST‐accredited labs, which examine not only the product’s source code but also its design documents and related materials—before subjecting the product to a battery of confirmation tests.

In fact, there’s another facet—in addition to encryption algorithm and key strength—that further demonstrates how all encryption isn’t the same: back doors. Encryption is implemented by computer programs, and those programs are written by human beings—who sometimes can’t resist including an “Easter egg,” back door, or other surprise in the code. These additions can weaken the strength of security‐related code by making it easier to recover encryption keys, crack encryption, and so forth. Part of the CMVP process is an examination of the program source code to ensure that no such back doors exist in the code—further validating the strength and security of the encryption technology.

So the practical upshot is this: All encryption is not the same, and rather than become an expert on encryption, you should simply look for products that have earned FIPS 140‐2 certification. Doing so ensures that you’re getting the “best of breed” for modern cryptography practices, and that you’re avoiding back doors, Easter eggs, and other unwanted inclusions in the code.

You can go a bit further. Cryptographic modules are certified under FIPS 140‐2, but the algorithms themselves have standards of their own: FIPS 197 covers the Advanced Encryption Standard (AES), FIPS 180 covers the SHA family of hash algorithms, and FIPS 198 covers HMAC. By selecting a product that uses validated implementations of these algorithms, you’re assured of getting the strongest, best‐vetted encryption currently available.

– From The Tips and Tricks Guide to Managed File Transfer by Don Jones

To read more, check out the full eBook or stay tuned for more file transfer tips and tricks!

I spent my morning reading through the 2010 Data Breach Investigations Report that was just published by the Verizon RISK Team and the United States Secret Service.  This is an amazingly insightful report with lots of information to digest.  If the topic of data breaches interests you, I highly recommend finding time to read through it.

Data breaches are scary.   Nobody wants to be a victim… And nobody wants their company to be the next headline on the news.

Data breaches are expensive.  According to the Ponemon Institute’s 2009 Cost of a Data Breach study, the average cost of each compromised record is $204.

Here are 5 quick recommendations that I’d like you to consider:

  • Recognize your data:  Before you can protect confidential, sensitive and important data, you must first go through an exercise of identifying where it lives, who has access to it, how it’s handled, and what systems it touches, and then make sure any and all interactions with the data are fully visible and auditable.
  • Take proactive precautions:  The majority of breaches were deemed “avoidable” if the company had followed some security basics.  Only 4 percent of breaches required difficult and expensive protective measures.  Enforce policies that control access and handling of critical data.
  • Watch for ‘minor’ policy violations:  The study finds a correlation between seemingly minor policy violations and more serious abuse.  This suggests that organizations should investigate all policy violations.  Based on case data, the presence of illegal content on user systems or other inappropriate behavior is a reasonable indicator of a future breach.  Actively searching for such indicators may prove even more effective.
  • Monitor and filter outbound traffic:  At some point during the sequence of events in many breaches, something (data, communications, connections) goes out externally via an organization’s network that, if prevented, could break the chain and stop the breach. By monitoring, understanding and controlling outbound traffic, an organization can greatly increase its chances of mitigating malicious activity.
  • If a breach has been identified, don’t keep it to yourself:  Standard procedure for data breach recovery should be to quickly identify the severity of the breach… And affected individuals have a right to know that sensitive information about them has accidentally been compromised.

I’m going to end this blog post by asking you to estimate how many sensitive files and records your company holds… Now multiply that by $204. Even a modest 10,000 records works out to more than $2 million in potential exposure. I’m sure you’ll agree that the ROI on the time and resources spent to protect company data is well worth the investment.

That’s right. Get ready to say goodbye to cloud computing.

Not the hosting and using of services over the Internet, oh no. I’m talking about the term “Cloud Computing.”

Well, that’s just one of John Soat’s “Five Predictions Concerning Cloud Computing.”

What are the five predictions?

  • All applications will move into the cloud.
  • Platform-as-a-service (PaaS) will supplant software-as-a-service (SaaS) as the most important form of cloud computing for small and, especially, mid-size businesses.
  • Private clouds will be the dominant form of cloud computing in large enterprises.
  • Hybrid clouds eventually will dominate enterprise IT architectures.
  • The term “cloud computing” will drop off the corporate lexicon.

This is a fun and engaging read, and the comments afterward are equally interesting. Worth checking out.

When interviewing job candidates, I’m always on the lookout for dedicated, motivated, passionate people who relish rolling up their sleeves and doing whatever it takes to get the job done.  Why?  Because a little bit of chutzpah goes a long way towards being a successful and productive employee.

But can employees “going above and beyond” backfire and result in severe damage to a company?

Unfortunately, yes, they can.

In his guest blog post on LastWatchdog, Gary Shottes, President of Ipswitch File Transfer, describes how hard-working employees can create new security and legal liability issues that organizations need to consider carefully when deciding which tools to provide their people.

“Highly-motivated workers are willing to do whatever it takes to get the job done, with or without IT.  Employees, whose job requires them to send information to colleagues, partners, vendors or customers around the globe, have literally thousands of file transfer options.

If IT fails to provide employees with a fast and easy way to share information, they will take matters into their own hands, even if that means using technology that’s not sanctioned by IT. They may use a personal webmail account, smartphones, USB drive, or even transfer data via Facebook and LinkedIn.”

Combine that increasingly familiar scenario with recent survey data indicating that over 80% of IT executives lack visibility into files moving both internally and externally, and the scary point hits home: there’s a big security hole in many companies… And organizations need to make sure that employees can’t crawl through it, even with the best of intentions.

Fortunately, there are some great tools out there to arm employees with a quick, easy-to-use and secure way to share information with other people, both inside and outside the company, while at the same time providing the company with the critical visibility, management and enforcement it needs to protect sensitive and confidential information.  This is one situation where it makes a lot of sense to lead the horse to water and make it drink.

Industry expert Michael Osterman shares some great editorial and perspective in Messaging News on the Ipswitch acquisition of MessageWay.  He starts by pointing out that Ipswitch is positioned as a “Leader” in the latest Gartner Magic Quadrant for Managed File Transfer, as well as Ipswitch’s proven track record in the file transfer space (nearly 20 years, for those counting).

He also nailed what the acquisition immediately brings to the table as far as expanding Ipswitch’s range of solution offerings:  “(Ipswitch has) clearly boosted its position in the MFT space with this acquisition given that MessageWay’s MFT solutions are designed for high volume file transfer applications in the large enterprise (Global 2000) and service provider markets.”

I particularly like (and agree with) his answer to the question of “Why is MFT important?”

“Among the many reasons are two key ones:

read more “Why is MFT important?”

It’s been a while since I’ve seen “file transfer” as a headlining productivity problem for end users, but here it is making an appearance in an article about how hard it is to use the iPad in the context of an average end user’s collection of gear.

Prisoner of iTunes – the iPad file transfer horror
“The conflict between consumption and productivity”
http://www.theregister.co.uk/2010/06/07/ipad_file_transfer/

Tax season is behind us (at least for most of us) and we can all breathe a sigh of relief… but can we? This year, getting my taxes organized and handing them to my accountant seemed more difficult than usual. Fortunately for me, the Federal Government gave certain areas dealing with flooding a small extension, which allowed me to find the time to get my taxes to my accountant.

Once that task was completed, I was able to relax, except for the fact that I now had one day to get back into the accountant’s office and sign the documents for them to send to the IRS.

read more “Do People Realize What They Are Sending and the Risks Associated?”

The growth and evolution of the managed file transfer industry continues to be a blessing for Ipswitch and our partners.

The acquisition of Sterling Commerce by IBM (article) presents an opportunity for both companies’ customers and prospects to reexamine their challenges around advanced file services. Proprietary technologies and protocols such as Connect:Direct (formerly Network Data Mover, or NDM) are inefficient, expensive and difficult to manage. Yet many companies continue to pay excessive licensing and maintenance fees because the cost and effort to replace these technologies have, until now, seemed just as expensive.  Furthermore, some partners and ecosystems insist on using legacy file transfer technology because alternatives did not seem to be available.

read more “Ipswitch Steps Up To Replace Legacy Technology After Sterling Acquired by IBM”

PCI audit regulations around scope continue to drive the need for people to segment their networks, applications and often, their equipment.   At Ipswitch, we often see new enterprise customers fed up with their monolithic legacy systems coming to us with a “tactical” need to segment.

Typically, these customers leave a large number of existing transfers on their legacy system to begin with (a 10:1 ratio of legacy to new connections is not uncommon), and some try-and-buy a MOVEit system at that point.  However, others get hooked on the idea of a more flexible, more relevant system and end up with a strategy to migrate all partner connections to MOVEit over a period of 12-18 months, even as they use MOVEit to completely address their short-term PCI segmentation needs.

In the automated file transfer world there are two general user experiences.

Workflow #1: Inbox/Outbox – When an end user (or application) signs on, it sees one or two folders: an “inbox” where files can be dropped off and an “outbox” where files can be picked up.  Items placed into the inbox frequently disappear into an internal system almost immediately; items downloaded from the outbox frequently disappear just as quickly.

A common variation on this is the combined inbox/outbox, where every item visible to the end user is an “outbox” item, and end users simply upload new items (which disappear immediately) to the same folder.
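Here is a minimal sketch of Workflow #1’s server-side behavior, with the directory layout and the ingest step invented for illustration: files dropped in the inbox are handed off to an internal system and removed, and files fetched from the outbox are deleted on pickup.

```python
import shutil
import time
from pathlib import Path

INBOX = Path("users/acme/inbox")      # hypothetical per-user folders
OUTBOX = Path("users/acme/outbox")
INTERNAL = Path("internal/intake")    # where a back-end system picks files up
for d in (INBOX, OUTBOX, INTERNAL):
    d.mkdir(parents=True, exist_ok=True)

def sweep_inbox() -> None:
    """Items dropped in the inbox 'disappear' into the internal system."""
    for item in INBOX.iterdir():
        if item.is_file():
            shutil.move(str(item), str(INTERNAL / item.name))

def deliver_from_outbox(name: str, destination: Path) -> None:
    """Items fetched from the outbox are removed immediately after pickup."""
    src = OUTBOX / name
    shutil.copy2(src, destination)
    src.unlink()

while True:
    sweep_inbox()
    time.sleep(10)  # a real MFT server reacts to upload events instead of polling
```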

read more ““Inbox/Outbox” vs. Folders When Designing File Transfer Workflows”