Imagine fielding a barrage of support calls from college students up in arms because they can’t access the wireless network. Not long ago, that was the reality for a large university. The help desk was overwhelmed trying to manage a network of 2,500 access points, on top of BYOD chaos and students’ big appetites for downloads.

The IT team grew tired of running around campus to check on wireless network equipment. They decided they needed a better network monitoring solution. After exploring their options, they selected WhatsUp Gold to keep their wireless network in check.  

WhatsUp Gold consolidated wireless (and wired) network monitoring, and it offered a single dashboard where network administrators can:

  • Rapidly respond with the help of real-time alerting
  • Know connected users and devices for each access point
  • See individual user and device bandwidth usage
  • View signal strength and hardware health

The university can now respond to problems quickly, and often remotely, before students organize a protest.


The article “Why Do Big IT Projects Fail So Often?” by Jim Ditmore on the InformationWeek Global CIO website is a commentary on the Affordable Care Act’s website implementation.

It offers some good points for any project. “Enterprises of all types can track large IT project failures to several key reasons:

  • Poor or ambiguous sponsorship
  • Confusing or changing requirements
  • Inadequate skills or resources
  • Poor design or inappropriate use of new technology”

Take these factors, add the politics and cumbersome processes of the federal government, and it’s not difficult to see the challenges.






Halloween represents the time of year that we embrace ghouls and ghosts, celebrate the macabre, and eat too much candy. This coming Thursday I’ll be greeted at my front door by trick-or-treaters, lined up for their packaged sugar rushes. In between trips to check out the little ghosts and ghouls, I’ll be watching one of my favorite horror movies. For me, being scared is part of the fun.


However, for sysadmins and network managers, Halloween plays itself out every day of the year. So what better time to visit the issues that turn your server rooms into your own personal house of horrors?

We know no two networks are exactly alike, so we focused on 13 network nightmares that represent the common hauntings of every server room. The number alone signifies something to be wary of. Some buildings don’t have a 13th floor. Any Friday that falls on the 13th day of the month can give even non-believers a moment of pause. These 13 network nightmares highlight the type of problems that keep many IT folks awake at night, while fearing the unspeakable network terrors that may face them at work the next day.

Even though Halloween may be a lot of fun, mention any of these 13 nightmares to a network manager and you are likely to see a look of true horror.

Here’s the fleshed-out list of network nightmares, along with some tips on how to solve them in the real world. Have an evil glance, if you like:

1.  The Zombies: Only zombies should be slow, not your network. Slowdowns can make it nearly impossible to keep systems and applications up and running at peak levels. With better insight, you can move fast to solve problems before they start to negatively impact business operations and users.

2.  The Vampires: Don’t let network vampires suck the life out of your wireless network. These creatures can take a bite out of network performance with the use of satellite radio and streaming video. Once you track them down, put your stake in the ground and kindly share IT policy so they can listen to Pandora back at the crypt, and not in the office.

3.  The Skeletons: Dealing with bare bones budgets is a constant problem for IT professionals, who are expected to provide higher levels of service to users, with fewer dollars. IT folks should be able to face the skeletons in their closets and monitor their networks, applications and systems affordably.

4.  The Frankensteins: A whole bunch of disjointed pieces and parts can yield monster network monitoring problems. Network administrators should not have to play the mad scientist. Trying to make the nuts and bolts and random wires of their network play nice together shouldn’t look like a scientific experiment gone wrong.

5.  The Spiked Maces: Spikes in network performance can make anyone nervous. Be prepared for high levels of traffic on days when Apple offers a download your users cannot resist. When you can be proactive, the spikes on the network won’t come swinging at you like a medieval mace.

6.  The Ghosts: What problems are haunting your network? Network administrators can be effective ghost hunters and find the spectres, including slowdowns and frightening downtime.

7.  The Chucky (Knife-Wielding Dolls): What may seem like a small threat can actually instigate big problems. What little monster is wiggling its way down into deep layers of the network to compromise security? Unchecked small problems can quickly turn into a network breach if it takes weeks to find the culprit, especially if the problem is intermittent. Small problems are not “child’s play.”

8.  The Jasons (Scary Intruders): Don’t let software and applications lurk in the shadows. Network administrators need to know which users have downloaded unauthorized applications onto their networked laptops. Turn on the light so you don’t get lost in shadow IT.

9.  The Mummies: Are you continuously wrapped up in the same problems that keep returning? Finding the source of an issue shouldn’t be as hard as digging into an ancient Egyptian crypt.

10.  The Devils: No-cost shortcuts like free open source products can tempt you with big promises, but they can steal your soul if you depend on them to monitor your network. Listen to the haloed, winged creature on your other shoulder and invest in an affordable solution that gets the little devils out of the network.

11.  The Gravestones: Downtime? More like Rest-in-Panic. Finding the source of a problem on the network shouldn’t bury you six feet deep.

12.  The Fog: When the fog sets in and bats come out to play, viewing the network can become eerie and impeded. If network administrators can’t get a complete view of their network, they won’t be able to clearly see through the fog and find the source of a slowdown or stoppage.

13.  The Werewolves: Don’t get bitten by the unexpected. Having the proper policies in place can be the silver bullet for dealing with bandwidth-hungry users.










Moving Files to Get Work Done

In an earlier post, Managed File Transfer (MFT) is about PEOPLE Getting Work Done, I made the offhand comment that our customers “do not move files for fun,” and what I hope to do in this post is expand on that idea a little bit. What I was trying to get across with my glib comment is that our customers use Managed File Transfer (MFT) to move files between themselves and their partners in order to solve larger business challenges.

Moving files from point A to point B is pretty much never an end in itself. Rather, files get moved in the service of larger goals and processes that bind business partners together: things like order-to-cash, insurance-claims adjudication, or content syndication, to name a few. At some point in each of these business processes, one or more files move between parties. In fact, that file transfer may be the critical bit of connective tissue that ties the parties together. But the story never ends there.

MFT is all about moving files to get work done, and in many cases the MFT system is doing some of that work, whether it be to:

  • Automate repetitive processes
  • Intelligently route content based on surface or deep metadata in the files
  • Process files to prepare them for the next step in a flow
  • Integrate with other systems to bind transfer events directly to the back office

So how does all this play out in the business world? Consider the following examples, all drawn from real customers (whose names have been changed, to protect the innocent):

  • A large healthcare provider does business with a nationwide network of hospitals that deliver employee time card information as scan files, using an automated-delivery client. When the scans arrive, the MFT system responds by checking the files, logging their arrival and other metadata, and then routing the files to a records-management system to be processed.
  • An insurance company receives formal requests for claims information from outside partners, and responds by piecing together content from an internal document-management system. When request files arrive, they are processed and interrogated, and data from the claims is used to retrieve the correct content from their document repository. The result is delivered back to the requester as a package assembled using the APIs of the MFT system.
  • A software services company utilizes secure MFT in order to move large packages of sensitive data related to technical-support cases back and forth between technical support and end customers. The files could be database content (with sensitive patient data or social security numbers) or executables too large for other means of exchange. When files arrive, processing in the MFT solution connects them to support records to maintain the continuity of the support experience and history.

Every one of these examples involves the transfer of files between parties, as well as the handling and processing of these files to achieve some greater end. In the case of the healthcare provider, the files are checked for validity, and then routed directly to a backend system in a straight-through process. In the case of the insurance company, the MFT system and the document repository interact with one another through an additional bit of custom code that utilizes the APIs of each to automate a process. In the case of the software services company, managed file transfer automation is used to intelligently bind delivered contents to records in a support system.

Managed file transfer (MFT) systems can automate the transfer of files between parties, as well as handle the processing of files to achieve business goals.

The patterns in these examples represent the rule with MFT, not the exception. It is never the case that the mere arrival of a file represents the end of the process. Typically, it is the start of a process, or the bit of tissue that links two or more parties in a business process.

That’s why a capable MFT system needs to support options that include:

  • Basic file-handling operations like renaming, padding/trimming, and metadata augmentation
  • Archive/package handling operations like zip/unzip, encrypt/decrypt
  • Integrity checking operations like validation of structured documents, translation or transformation
  • Data-clearing operations like antivirus and data-loss-prevention workflows
  • Standards-based integration options that allow for custom interactions between MFT and other systems driven by code or scripts
  • Basic Extract, Transform, and Load (ETL) operations that allow content to be loaded into a database for further processing
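Several of these options, packaging and integrity checking in particular, reduce to a few lines of code. As an illustrative sketch only (the function names are mine, not any product's API), here is how an archive step might zip a batch of files and emit a SHA-256 manifest the receiving side can use to validate integrity:

```python
import hashlib
import zipfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Integrity check: hash the file so a transfer can be verified end to end."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def package(paths, archive: Path) -> dict:
    """Zip the files and return a manifest of name -> digest for the receiver."""
    manifest = {p.name: sha256_of(p) for p in paths}
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            zf.write(p, arcname=p.name)
    return manifest
```

In a production flow the manifest would travel alongside the archive, and the receiver would recompute each digest after extraction before passing the content downstream.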

How are you handling your end-to-end processes today when it comes to file transfer? Are you finding gaps in your processes because your file-transfer solution doesn’t support the range of tasks needed to ensure security and smooth file exchanges?

I just read an article published by CNBC online about the chaos caused by BYOD in the workplace, and completely agree.

“Call it a movement of sorts, but employees are increasingly ditching their company issued computers and smartphones in favor of using their own devices to get work done. One big reason: Their company’s tech is, well, terrible.”

BYOD has to be embraced, and companies need to find a way to make it work. From both a productivity and an employee-happiness perspective, it’s not reasonable to simply say no to users bringing their own devices into the corporate environment. It becomes a matter of tracking wireless users, their devices and their usage habits so you can develop and enforce effective BYOD policies and make sure those devices don’t impact the network’s performance.

It’s an issue we deal with all the time at Ipswitch and are committed to working with our clients to help find solutions, not just put up roadblocks.


Prompted by the challenges of launching the Health Insurance Marketplace (aka Obamacare), CIO Niel Nickolaisen wrote an article (SearchCIO) that provides helpful pointers on rolling out mission-critical applications:

“Does the glitch-ridden Health Insurance Marketplace bring back bad memories?”

  1. Always plan for more than the most hopeful demand. “I polled the team for their most optimistic number of visitors per hour and per day. I then took the highest number and multiplied it by 2.7.”
  2. Plan on elasticity. “We did not want to build to the peak demand. Instead, we built the entire infrastructure for a more reasonable number, but then planned on high levels of elasticity.”
  3. Pilot, pilot, pilot. “I learned a long time ago that no one knows what they want until they see it. And that once they see it, they will want to change it.”


Everything was changing at once at the Boston corporate headquarters of a blue-chip company, an IT executive told us recently. He had just come on board as the CIO, and his first major project would be to create and execute the plan to move IT to a new location.

Worried about getting enough Layer 2 detail

As he surveyed the situation, he began to panic. Network mapping was a mess. The Visio diagrams of the network and the inventory spreadsheets were outdated. This left him not knowing what the pieces of the IT infrastructure were, where they lived, or how they were interconnected.

He reached out to us and asked, “How do I know what’s connected so I can figure out what I’m going to hook up or swap out for new replacements when we move our corporate headquarters?”

The Ipswitch sales engineer recommended WhatsConnected network mapping software that automatically discovers, maps, inventories and documents Layer 2/3 network devices, servers, and workstations down to the component level. Not to mention virtual resources, software assets, and port-to-port connectivity.

Armed with the data he’d get from network mapping, he knew he could make upgrade decisions and complete the move on time while minimizing the impact to the business.

While skeptical, the CIO took the plunge and had his team download the software.

Now he looks like a hero

When we followed up with him shortly after, his skepticism had vanished. Within minutes of installing and configuring the product, he had generated reports showing that he’d be able to reuse or repurpose about 65% of his IT inventory. That was a lot more than he had anticipated. He could now make the move to the new facility well within budget and look like a hero. The network mapping and discovery process also uncovered a “gift”: a SAN storage device worth about $60,000 that, while connected to the network and powered on, had never been placed in service.

That find alone paid for the CIO’s investment in WhatsConnected many times over. And it helped turn his first major IT project at the company into a big win.

VIVA Health: Healthcare Managed File Transfer
“We needed a managed file transfer solution that would put us in compliance with all the various regulations.”

Depending on the industry, the Managed File Transfer (MFT) experience may vary in terms of the initial decision, the implementation and even the key benefits. I recently sat down with Ragan McBride – a Business Process Automation professional with 13 years of experience – to get some exclusive insights into the MFT process within the healthcare industry. 


Zak: What was your situation like before VIVA Health adopted MFT? What problems was the organization facing?

Ragan: Overall, there was no consistency in terms of the way file transfers were managed – everyone seemed to be doing them in their own way. Even people within the same IT group would schedule things differently and would put files on different servers. It got to be very problematic. We didn’t have control of our files, with so much duplicate data floating out there. And there was no way for us to manage file transfers from a single location.

Q: Your organization opted for an MFT solution as a remedy. How did that decision come about?

Ragan: It was a collaborative effort. We talked to people in numerous departments to find out if there were any major needs that we hadn’t considered. We also spent time reviewing our options with these departments, and talking with the CIO to determine our most pressing needs. But at the end of the day, in a healthcare organization, the IT department is really the one that must identify the needs and make the decision.

Q: And what were those pressing needs?

Ragan: Essentially, it all came down to auditability. We needed a more efficient way to do archiving and be better prepared to answer questions that could come up in audits, without killing ourselves later on. We also needed the ability to transfer files using a tool that centralized the process.

Q: You talked about the needs of other departments, as well as your own. We’re curious to know how many of those needs revolved around legal requirements as opposed to features/capabilities you simply wanted.

Ragan: Satisfying legal requirements was one of the primary reasons for switching to an MFT solution. In the healthcare industry, everything needs to be encrypted for HIPAA. For archival purposes, we have to keep certain claims data for a specified amount of time before cleaning it up. Basically, we can’t have files sitting out there indefinitely – the regulators always take notice. So we needed a healthcare managed file transfer solution that would put us in compliance with all the various regulations.

Q: Aside from addressing legal issues, what were some of the immediate benefits your organization saw after moving to an MFT solution?

Ragan: We saw an immediate improvement in terms of workflow and have saved an incredible amount of time by eliminating a lot of repetitive, manual processes for tasks like pulling down files and loading them into SQL. We’ve saved somewhere close to two-and-a-half man years just on file transfers alone, so the ROI has been quantifiable.

Q: Were there any big roadblocks in the way of adopting MFT?

Ragan: Nothing major, but I would caution others to be mindful of how MFT will work with other internal systems. They all need to work together seamlessly, so there will be a period where you have to identify the best ways to ensure this happens. In many cases, different systems produce different outputs. So you need to rename certain files, which can get somewhat complex if you need to start writing custom scripts. The important thing to keep in mind is that the solution to your integration problems is probably embedded in your MFT solution – you just need to spend some time upfront figuring it out. But it’s worth it because MFT helps improve your workflows, saves you time and removes hassle in so many ways.


How has MFT adoption evolved within your industry? Share your thoughts and feedback in the comments section below. Thanks!


When the new IT director for a major transportation company walked through the door on his first day, he knew in advance the big network monitoring headache he faced. He was joining a fast-growing company that supplies cargo containers used by ships, trains and trucks. To keep the containers moving, the 12-person IT team maintains a network of virtual and physical servers and desktops spanning 12 locations, using more than 90 network devices, with about 150 active and passive (SNMP trap) monitors.

No room for doom and gloom

An early meeting with his staff on that first day made it clear that the IT team didn’t use, or have access to, network monitoring tools. That meant they would only hear of a problem causing network downtime when user complaints began pouring in. Staff members would scramble to find a way to handle the problem, but they were doomed to repeat the process when the same problem arose again. Fortunately, he had used Ipswitch WhatsUp Gold network monitoring software at two prior jobs, so he installed the product along with several integrated plug-ins, including:

  • WhatsConfigured – Network configuration and change management
  • WhatsConnected – Layer 2 and Layer 3 discovery and network mapping
  • WhatsVirtual – Physical and virtual server monitoring from a single console
  • Flow Monitor – Network traffic monitoring and analysis

From this install, he created a dynamic topology map that detailed everything configured on the network. He also used an out-of-the-box network status report to highlight problems like unmanaged switches, network loops, overloaded devices and wiring issues. From that point his staff was able to rip and replace all the failing elements of the network.

Up and running 99.999% of the time

A year later, network uptime had improved from a baseline they never wanted to measure to 99.999%, in part because they were now able to use network monitoring to solve for root causes rather than treating symptoms. This helped ensure that problems now stay solved. As a result, only two staff members are needed to maintain the network. “Once WhatsUp Gold is set up, it runs itself,” the director explains. This has freed the rest of the staff to carry out proactive work that’s helped raise overall IT service levels to new highs.
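Five nines is a stringent target: the arithmetic works out to only about five minutes of downtime per year. A quick back-of-the-envelope helper (the function is mine, for illustration only):

```python
def allowed_downtime_minutes(availability_pct, period_hours=365 * 24):
    """Minutes of downtime permitted over the period at a given availability.

    At 99.999% ("five nines") over a year, this comes to roughly 5.26 minutes;
    at 99.9% ("three nines"), it jumps to almost 9 hours.
    """
    return period_hours * 60 * (1 - availability_pct / 100)
```

The gap between each extra nine is what makes root-cause fixes, rather than symptom-chasing, so important at this level of service.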



Students in 47 schools throughout Los Angeles were recently given iPads as a study tool as part of a $1B rollout to 650,000 students.  It didn’t take long at all for more than 300 students to alter basic security settings and use the iPad for anything they liked. Tweets, Facebook posts and Subway Surfers were soon competing against legit academic activities.


The security gaffe, and the resulting embarrassment it caused the school’s administration, could have been avoided with a network data flow monitor in place. The school system’s IT staff would have been able to determine when their configured settings were being changed and get alerted when that happened. They’d also have been able to see which students were using Facebook and other sites by monitoring the iPads’ data flow.

Tracking the Bandwidth Hoarders

Network monitoring of data flow would allow school IT administrators to identify “top talkers” and determine which iPads were consuming the most bandwidth as they tracked them to sites like YouTube.  Spikes in activity caused by torrent downloads are another telltale sign that wouldn’t ordinarily occur otherwise.  Network monitoring would also help track and resolve network traffic congestion by classifying the traffic by type and protocol in real time.
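Conceptually, finding top talkers is just aggregation over flow records. Here is a toy sketch in Python, assuming flow data has already been exported by a NetFlow/sFlow-style collector as (source address, byte count) pairs; the record shape is an assumption for illustration, not the format of any specific monitoring product:

```python
from collections import Counter

def top_talkers(flow_records, n=3):
    """Aggregate bytes per source address and return the heaviest consumers.

    flow_records: iterable of (src_ip, bytes_transferred) tuples, e.g. from a
    flow collector export. Returns up to n (src_ip, total_bytes) pairs sorted
    by total bytes, largest first.
    """
    totals = Counter()
    for src_ip, nbytes in flow_records:
        totals[src_ip] += nbytes
    return totals.most_common(n)
```

Real flow monitors additionally classify traffic by protocol and destination, which is how a spike gets attributed to, say, a torrent download rather than a YouTube binge.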

We Can See You

With wireless access monitoring, administrators can see who is connected to the network in real time and receive alerts when students connect to sites not approved for use in school. They can even tell what area of the school students are accessing the network from, based on which wireless router they’re connecting to. This would help ascertain whether students were consuming significant amounts of data during class time, and whether they were surfing appropriate sites, or subways.

It’s not all big brother, though. Wireless network monitoring would also help the school locate an iPad if a student misplaced or lost the tablet, and it was still on school grounds.

While best practices can improve an organization’s overall security posture, we’ve built software improvements into MOVEit that further increase security

Through the years my role at Ipswitch has changed from someone taking front-line calls, going to customer sites and working with the engineering staff, to someone who is responsible for the “health” of the MOVEit product. During this time a lot has changed in the market as well. As an example, in the past ten years I have seen the ability to secure FTP go from a “nice-to-have” to a “must-have,” covering both files in transit and files at rest. These days organizations are a lot more focused on the services they sign up for and the security risk they represent. As a result, they ask more detailed questions about managed file transfer security, like “What encryption and hashing algorithms are being used?” They also ask third parties to audit the services for compliance. In my opinion, now more than ever, administrators need products they can trust with sensitive data.

In my opinion security is to MFT what location is to real estate, which is of course to say paramount. As I sat down to write this post, I tried to imagine transferring files without any security or controls. To me that seems absurd because businesses move files to get work done and people lose jobs when the proper security or control is not in place.

The truth is, software needs to do more to protect all the sensitive information that is exchanged. Just as the security triad of confidentiality, integrity and availability has evolved, so must software, along with the way it is built. That was a hard realization when we started working on the MOVEit 8.0 release. We understood that we needed to adapt to the changing landscape and get ahead of our customers’ audit and compliance issues.

With that in mind, I created the following cheat sheet to help those interested in making MFT software (whether MOVEit or another product) more secure.

Based on my experience, here are eight steps administrators should take:

1. Harden the host machine, or run a trusted tool to harden it.

2. Enable the strongest password policy allowed by the organization and expire passwords on a routine basis. If possible, utilize secure, external authentication such as LDAP to centrally manage and control passwords.

3. Set expiration policies and lockout policies on all accounts. Also, enable any system-level whitelist or similar functionality to block password-harvesting scripts.

4. Constrain external traffic to secure ports like TCP/443, TCP/22 and disable non-secure FTP in favor of explicit FTP over SSL/TLS or implicit FTP over SSL/TLS. Minimize the attack surface to only the necessary services and use those services in the most secure way.

5. Use FIPS mode, if possible, and/or disable weak SSH and SSL algorithms. This allows administrators to use only the strongest security.

6. Configure and review built-in security audit reports on a regular basis.

7. Utilize two-factor authentication like SSL certificates if possible for additional security.

8. Enable user sessions to expire after a set amount of inactivity. This prevents anyone from gaining access from an open browser that is unattended.
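Step 8 is worth a closer look, since idle-session handling is often left at framework defaults. Here is a minimal sketch of an idle-timeout session check; the 15-minute policy, class shape, and injected clock are my assumptions for illustration, not MOVEit internals:

```python
import time

IDLE_TIMEOUT = 15 * 60  # hypothetical policy: expire after 15 minutes idle

class SessionStore:
    """Minimal sketch of step 8: sessions expire after a period of inactivity."""

    def __init__(self, timeout=IDLE_TIMEOUT, clock=time.monotonic):
        self._timeout = timeout
        self._clock = clock        # injectable for testing
        self._last_seen = {}

    def touch(self, session_id):
        """Record activity for the session, resetting its idle timer."""
        self._last_seen[session_id] = self._clock()

    def is_active(self, session_id):
        """True while the session has been seen within the timeout window."""
        seen = self._last_seen.get(session_id)
        if seen is None or self._clock() - seen > self._timeout:
            self._last_seen.pop(session_id, None)  # drop the stale session
            return False
        return True
```

The key property is that an unattended browser stops being a live credential after the window elapses, which is exactly the exposure step 8 is meant to close.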

While the best practices above help improve an organization’s overall security posture, we’ve built software improvements into the latest release of MOVEit that augment these operational changes to further increase security. Specifically, MOVEit 8.0 incorporates the following:

1. OWASP Top Ten – For as long as I can remember, we have focused on standards for MOVEit, like the RFC for securing FTP using TLS. Enter the OWASP Top Ten, a consensus document of the top web application vulnerabilities to eliminate in software. MOVEit now carries the latest protections against these common issues, such as cross-site scripting (XSS) and injection attacks, which is one tenet of PCI DSS 2.0. In a future post, I’ll elaborate on OWASP.

2. Transport Encryption Algorithm Control – Now MOVEit administrators can enable/disable weak transport encryption algorithms for FTP over SSL and SFTP. These options, coupled with the ability to enable FIPS, allow administrators the control they need for secure file transfers both now and in the future. They can also regulate the system to only use the most secure transmission between users and partners.

3. MOVEit Security Tool – We have improved the MOVEit Security Tool “SecAux” which was initially created to help administrators easily harden their machines without having to run through the registry and local security policy. The tool is run during installation (or can be run manually) and makes it easier for overburdened administrators to apply security policies.

4. Improved Security Process and Tools – A year ago we realized we needed to improve the way we think about and securely develop our software. So we set out to utilize the best tools available, formalize processes and engage a third party to validate our work. It is by no means perfection, but I think MOVEit 8.0 reflects the continued commitment to the best-in-class security MOVEit has been known for over a decade.

All of these security improvements and more are included in MOVEit 8.0 to give businesses and administrators the confidence they need in an enterprise-class managed file transfer solution where security is paramount. There is of course more in MOVEit 8.0 and I encourage those interested to review the release notes as I’ve just given an overview of what’s available.

Lastly, I wouldn’t be true to my Midwest roots unless I thanked you for taking the time to read my post. I welcome your comments and plan to write again soon, so please check back.

The staff at a call center services company we know spend all day on the phone. They depend on the performance of both the phone system and the applications they use, yet they were forced to live with substandard voice quality and application latency problems. Michael, a network admin at the company, knew how frustrating this was for them and called Ipswitch for advice. After discussing the issues, Michael was advised to install WhatsUp Gold VoIP Monitor, and he took advantage of the VoIP config utility to configure IP SLAs for every WAN circuit in an effort to lift performance levels.

According to Michael, “As soon as we had everything built, one site went into alarm for voice quality issues. We found a 14% packet loss outbound from the site on one MPLS circuit. Then we built a second SLA for the non-voice site that would be treated and verified the packet loss was not just on voice traffic.”

Not only had Michael found the cause of the voice quality issues, but in the process he had figured out why the applications ran slowly. We asked Michael how long it would have taken him to solve the problem without a VoIP network monitor. He told us:

We may not have even known there was a problem for a long time. Once the product was running, it only took a few minutes to find the problem. It took a couple weeks before the provider would do the proper testing onsite to narrow down the issue, but once they did, they found a bad interface on the Ethernet demark for our circuit. Since resolving the issue, all applications have run faster and voice quality has improved considerably. Agents and customers can hear each other better. With applications no longer suffering from latency, call times have been reduced, and this means agents can make more calls. This affects the bottom line more than anything.