Ipswitch Blog

Where IT Pros Go to Grow


Today we announced the findings of our third annual ‘Happy Holidays?’ survey, which reveals the nightmares IT pros can expect during the holiday season. The survey polled a total of 543 IT pros across the US and the UK.

Employees working remotely or being careless during holiday celebrations will create a nightmare before (and during) the Christmas season.

What we found interesting about the survey data:

Coping with the nightmare before Christmas

Rather than celebrating the holidays with friends and family uninterrupted, IT pros will be dealing with the network nightmares that arise. The survey, which polled 165 British IT pros, revealed that over a quarter (27 percent) can expect to be either on-call or working on Christmas Eve, with 10 percent working on Christmas Day. Thirteen percent of IT professionals in the UK also expect to be tied up with work matters on New Year’s Eve.

IT Pros and Employees: Home for the holidays

IT teams can expect an increased demand for remote management capabilities and 24/7 access as employees will be working from home, traveling or on vacation. When asked what percentage of their workforce will be working remotely over the holidays, 47 percent of IT professionals in the U.S. and 51 percent of the Brits said up to 25 percent. Another 29 percent in the US said up to 50% of their workforce, as compared with 26 percent of the British IT pros.

Holiday horrors continue

When asked what the most common IT problem employees face when the office is closed for the holidays, the top two issues for IT pros in the UK and the US were laptop problems (39 percent) and the inability to access the network (36 percent). 28 percent of IT professionals in the UK indicated poor application performance was also a common problem, followed by the 21 percent that reported security-related issues (e.g. malware on laptops). To add to mounting pressures, 41 percent of IT professionals in the UK have experienced a major network outage during a company holiday while 38 percent of IT professionals in the US have experienced the same.

Celebrations gone wrong

While employees are spreading holiday cheer, IT pros are left tackling the consequences that can result from company holiday celebrations. Over half of IT professionals (57 percent) in the UK reported that they’re worried that their network could suffer a data breach at the hands of a careless celebration. In addition, 36 percent of IT professionals in the UK confirmed they have had an IT user report the loss of a device holding company data following holiday celebrations in a pub, restaurant, or at a party.

What to expect in 2016

When asked what IT believes will be the “must-have” gadget of 2016, 34 percent of IT professionals in the UK said wearable technology, whereas 33 percent of IT professionals in the US said smartphones. The survey also found that the top resolution for 2016 among IT professionals in both the US and UK was an increased level of network security, cited by about 50 percent of respondents.

For the full findings from both the US and UK, download the respective infographics and data below.

US Version

UK Version

Nightmare Before Christmas Infographic (UK):


Related articles:

How the Network Stole Christmas

How the Grinch Stole Wi-Fi

Ever get an alarm storm striking your network and distracting you to no end? Over Christmas or not, discover why dependency mapping and monitoring will improve network visibility and control.


A few weeks from now, a good number of people will try to stick to their New Year’s resolution to shed some weight gained over the holidays. Waistlines may not be the only thing slimming down, though: your data storage spend can shrink as well.

As cloud providers “race to zero” and alternatives such as SSD gain traction, the price of data storage is dropping. Yet many companies still find IT costs climb as they’re pressured to store more information — the big data market is on track for 23 percent CAGR through 2019, according to Research and Markets — while ensuring other departments have immediate access to that data whenever, wherever.

The result? Increased C-suite expectations paired with budgets that don’t match up. Here are 10 tips for controlling storage costs without sacrificing access or performance.

1) Create in the Cloud

Controlling IT costs starts with an evaluation of existing processes: Which ones need to stay on in-house servers and which can be moved to a public or hybrid cloud? One great candidate for the cloud is application development, since the storage and server resources required to dev/test in-house not only reduce network performance as a whole, but result in significant costs if testing doesn’t go as planned. Rather than building (and paying for) an internal test environment, consider building apps in the cloud and then moving them back to local stacks once they’re ready for deployment.

2) Match Management

As noted by TechTarget, it’s often possible to reduce IT spend by migrating licensed applications to newer and more efficient servers. If storage appliances aren’t upgraded at the same time, however, the result can be a management mismatch: Servers can handle the CPU demands of cutting-edge apps, but storage solutions can’t provide data fast enough. Bottom line? Matching storage and server management is essential to level out your costs.

3) Send Off Old Storage

It may seem counterintuitive to purchase new storage solutions when existing hardware is still up and running, but in some cases you’ll save more by spending now than by trying to squeeze every last cycle out of legacy gear. Newer models typically offer more space combined with lower operating costs, but this transfer method only works if your data is new enough to make the transition. If file types and storage architectures are incompatible with newer hardware, this is another opportunity to leverage the cloud using an integrated storage appliance.

4) Don’t Get Sentimental

Not all of your apps are getting used, and it’s time to let them go. Some simply don’t perform as intended and others have been replaced by newer, better versions. As a result, it’s worth doing an “app purge” every six months or so. Take a hard look at the software stored on your system and track down any obsolete or seldom-used apps. Make sure they’re not tied to critical functions and then “retire” them using long-term, low-cost storage.

5) Consider Colo

CBRE Group estimates the average 5-megawatt data center costs $270.1 million to operate over 10 years — a big chunk of change for any enterprise, let alone a small or midsize business. Part of that cost comes from building and server maintenance, while rising power prices also have an impact on storage viability. Although it is possible to reduce this cost using tax breaks and careful planning, another option is colocation. You bring the storage hardware but don’t have to pay for facility management or power. In effect, the physical costs are handled without your supervision, freeing you up to focus on streamlining storage itself.

6) Gone in a Flash

Flash and SSD are popular buzzwords, and that’s no surprise when they perform better than traditional hard drives and are less likely to break. According to Tech Times, however, the cost of SSDs still puts a full switchover out of reach for many companies. Even so, it is cost-effective to start moving in this direction, especially for critical or high-demand apps. Spending a little on SSD or flash can have big returns and improve the long-term prospects of your storage environment.

7) Live and on Tape

A few years ago, tech pundits predicted the death of magnetic tape; surely with advanced storage arrays, public clouds and flash devices, any available tape would simply disappear. Enterprise Storage Forum suggests otherwise; demand for tape is higher than ever. Why? Because it offers long-term, high-volume and low-cost storage for data that your company doesn’t need right now but may need five or 10 years down the road.

8) Opt for Open Source

Want to control IT costs for storage? Consider open source. A number of high-profile, well-supported projects — OpenStack, for instance — provide open-source solutions to help improve your storage environment without forcing you to pay licensing costs. Better still, you can customize this code to your liking, rather than getting pigeonholed by providers.

9) Outsource Recovery

Disaster-recovery solutions are one of the biggest money sinks in any organization. They’re necessary, of course, but that doesn’t make them cheap. By opting for DR-as-a-Service (DRaaS), you can leverage economies of scale to bring down costs and free up local storage for mission-critical apps and data analytics.

10) Circular Backup

One last tip for controlling storage and IT budgets: Make a local backup of your offsite backup. Sounds backwards, but by keeping a copy onsite, you’ll be able to more quickly recover after a disaster so you’re not left high and dry if your DR provider experiences an outage. And by narrowing your focus to the most recent iteration of your backup, you can minimize its footprint while protecting your interests.

Full-access, high-performance storage is essential. And expensive. Consider these 10 tips to help lower IT costs without sacrificing performance.

Application monitoring can help troubleshoot bandwidth bandits and other disruptions (credit: Jerry John | Flickr)

Cloud computing is a ready-made revolution for SMBs. Forget about server downtime; elastic computing and API-driven development are perfect for smaller organizations with project funding in the mere thousands of dollars.

All that agility is allowing information architects to think big — smartphone connectivity, IoT, lambda architecture — with existing app performance monitoring standards becoming more Web and socially aware.

Perfect world, right? Well, maybe a “perfectable” world. While developers are doing the elastic, agile thing — leveraging the power of pre-built tools through IFTTT or Zapier and getting Big Data tools from GitHub — they’re making assumptions about available bandwidth. They may even add Twilio to the mix so the company can SMS you in the middle of the night when their app hangs.

App Performance: ‘It’s Spinning and Spinning’

“I can’t do anything. It just keeps spinning,” you’re thinking. Classic Ajax loader. Users from a different era prefer freezing metaphors, but those are just as obvious and don’t encompass today’s issues: “My email won’t send,” “My daily sales dashboard won’t load” and, now, “the whole neighborhood’s smart meters are offline.”

A new set of network demands is rounding the corner, foreshadowing a greater need for application performance monitoring: SIEM, Big Data, IoT, compliance and consumer privacy audits. Together they spell the slow death of offline archiving. And for each, file sizes are on the rise and apps are increasingly server-enabled, often with heavy WAN demands.

Open Source, DIY and Buy-a-Bigger-Toolbox

Presented with bandwidth concerns, some support specialists (or DIY-minded developers, as that is often the SMB way) will turn to open-source tools like Cacti to see what they can learn. And they may learn a lot, but often the problem lies deeper inside an app’s environment. As one support specialist explained (known as “crankysysadmin” on Reddit), “It isn’t that easy. There are so many factors that affect performance. It gets even more tricky in a virtualized environment with shared storage and multiple operating systems and complex networking.”

Another admin in the Reddit thread agreed: In terms of app performance monitoring, he responded, “there’s no one-size-fits-all answer. What type of application are we talking? Database? SAP? Exchange? XenApp? Is it a specific workflow that is ‘slow’? What do you consider ‘fast’ for that same workflow?”

Event-Driven Heads-Up for App Hangs and Headaches

App usage spikes have many possible causes, which is precisely why a commercial app monitoring tool that is easy to use in a pinch can ultimately pay for itself. Depending on your site’s update policies, the types of applications supported, the regulatory environment, SLAs and cloud vendor resources, you’ll sooner or later be faced with:

  • Massive updates pushed or pulled unexpectedly.
  • Surprise bandwidth-sucking desktop apps.
  • Developer runaway apps.
  • App developer design patterns tilted toward real-time event processing.
  • Movement toward the more elastic management of in-house resources.
  • Management of bandwidth usage by cloud service providers.
  • A need to integrate configuration management with monitoring.
  • Increased support of operational intelligence, allowing for real-time event monitoring as described by Information Age.
  • Monitoring to develop application-dependent situation awareness.

The last of these, situational awareness, deserves emphasis. Consider the impact of moving monthly reports to hourly, or a BI dashboard suddenly rolled out to distributor reps. Situational awareness at the app level can ward off resource spikes and sags, or even server downtime.

Identify What’s Mission-Critical

Whether the monitoring answer is open source or commercial depends partly on whether your apps are considered mission-critical. For many, VoIP and Exchange have been those applications. The SLA expectation for telephony, for example, is driven by the high reliability of legacy on-premises phone systems, which rarely failed. SLAs for VoIP are often held to the same standard.

And what’s mission-critical is probably critical for job security. If the CEO relies on a deck hosted in SharePoint for a briefing at a major conference and he can’t connect at the right moment, you may wish you had a bigger IT staff to hide behind.


Related articles:

Are Your Mission-Critical Applications Starving for Bandwidth?

Noble Truth #5: Network and Application Performance Defines Your Reputation

Ask 10 network professionals about infrastructure security and you’ll get almost as many opinions, ranging from “you don’t need more than a firewall and a good set of access rules” to “invest in a variety of built-in and standalone network security tools.” The truth usually lies somewhere in the middle.

Admittedly, you don’t always need to buy a shelf full of software to realize good infrastructure security on a budget. “All you really want is a good firewall and good security permission within the network,” says Ryan Jones, an independent network security consultant. “Use a limited-access principle and give everyone the minimum required access and escalate the permission upward only when required.”

This approach will work for some, but others — especially those involved in banking or e-commerce — will need at least another layer. “Using metrics management and monitoring [for] the network and data is complex, but basically, apply some methodologies and use the software of your choice to manage security,” recommends Rodrigo Arruda, an IT specialist for Itaú, an international financial institution headquartered in Sao Paulo, Brazil. “It does often involve some cost, though.”

Stay Up to Date

You don’t have to spend your department’s whole budget on just a few things. In fact, Peoria Magazines says much of what you can do to secure your network without breaking the bank is free or close to it. Keeping your software up to date between major revisions is usually free and will plug holes you might otherwise discover at an inconvenient time.
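On a Linux server, for example, staying current is nothing more than a periodic package refresh. The commands below show the stock Debian/Ubuntu workflow; on Red Hat-family systems, yum or dnf plays the same role.

```
# Refresh package metadata, then apply pending updates (including security fixes).
sudo apt-get update
sudo apt-get upgrade -y

# Optional: have security patches applied automatically between manual passes.
sudo apt-get install -y unattended-upgrades
```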

Stay Fired Up

You should also be using a sturdy firewall product and configuring it according to the nature and sensitivity of your data. Don’t set it to auto-learn, which can be just as bad as auto-correct on a smartphone. Manage the rules so it knows which programs have what level of access, and be sure to specify the ports that will be used. Keep in mind that firewalls should supplement more comprehensive authentication and threat-detection protocols.
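As a concrete illustration of “manage the rules and specify the ports,” here is what a minimal default-deny inbound policy looks like in Linux iptables terms. The open ports shown (22 for SSH, 443 for HTTPS) are examples only, not a recommendation for your environment.

```
# Default-deny: drop any inbound traffic not explicitly allowed below.
iptables -P INPUT DROP

# Keep replies to connections this host initiated.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Open only the ports your services actually use (examples).
iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # SSH management
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS
```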

Deny the SPAM

Kaspersky and similar cloud-based security services integrate pretty well with professional email platforms, but your team should still be willing to invest about $1,500 in a decent spam-filtering appliance. Phishing of unsuspecting staff is often how network intrusions begin (you’ve trained them on spotting phishing content, right?).

Lock It Up Properly

Another way to ensure infrastructure security on a budget is to limit user access. This means John in Accounting and Mary in Sales shouldn’t be installing new software on a regular basis; in fact, these users should only need to install new software once or maybe twice a year. Only administrators and select department heads should be given administrative access to the network. Everyone else should be given the most basic rights they need to do their jobs efficiently and securely.

Use Deception to Foil Intruders

Sun Tzu, in his famous tome, said: “All warfare is based upon deception.” A minor modification and it resonates with IT personnel: “All ‘warefare’ is based upon deception.” In other words, use software to deceive intruders. Products like the Active Defense Harbinger Distribution (ADHD), hosted on SourceForge, can detect a malicious network entry and block all outgoing traffic to that IP. To the intruder, your network just went dark.

Use a VPN for Remote-Access Users

Once upon a time, you could give your remote users a phone number, have them dial into your network and rely on something akin to a secure net key for access. Today, a virtual private network (VPN) is the way to go: the encryption a VPN uses is typically unbreakable in practice, and even a successful brute-force attempt would take so long that the connection itself drops before the key is recovered. OpenVPN is a solid open-source project and free through its community version.
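For reference, a bare-bones OpenVPN server configuration looks something like the sketch below. The certificate and key file names are placeholders you would generate with your own PKI (easy-rsa is the usual companion tool), and this is a starting point rather than a hardened setup.

```
# /etc/openvpn/server.conf -- minimal sketch, not production-hardened
port 1194
proto udp
dev tun

# PKI material generated separately (placeholder names)
ca ca.crt
cert server.crt
key server.key
dh dh.pem

# Hand out client addresses from a private pool and keep sessions alive
server 10.8.0.0 255.255.255.0
keepalive 10 120
persist-key
persist-tun
```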

Keeping your network secure with limited funds isn’t impossible, but it may seem like an insurmountable task at times. With proper planning, however, it doesn’t have to be. Aside from a spam-filtering appliance, most of the suggestions above will only cost you and your team some necessary time.


It doesn’t take a ninja to know that Simple Network Management Protocol (SNMP) allows administrators to monitor network-attached devices. With that noted, you might actually need to be a ninja to enable and configure SNMP on Windows, Linux/Unix, Cisco and ESXi.

Have no fear. Here’s a step-by-step guide to enabling and configuring SNMP on each platform so you can monitor them with Ipswitch WhatsUp Gold infrastructure monitoring software and administer with ease.


Windows

On Windows, the first step is adding the SNMP feature (Server 2008 and above) or the component under “Add/Remove Windows Components” (Server 2003 and below). Once the feature/component is added, open services.msc [Start > Run > services.msc], find the SNMP service and double-click it.

There are two important areas in the SNMP service configuration. The “Traps” tab determines where SNMP traps from the Windows host will be sent and which community name those traps will use. The “Security” tab lets you set up your read/write community names and grant access to the WhatsUp Gold server. Once you apply your settings, restart the SNMP service for them to take effect. Then you’re done.
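If you would rather script those settings than click through the dialog, the Windows SNMP service reads them from the registry. The commands below are a sketch based on the commonly documented key locations; the monitoring server address is hypothetical, a DWORD of 4 grants READ ONLY rights and 8 grants READ CREATE. Verify against your Windows version before rolling this out.

```
:: Add a read-only community string (DWORD data 4 = READ ONLY rights)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities" /v YOUR_STRING /t REG_DWORD /d 4

:: Permit the WhatsUp Gold server (hypothetical address) to query this host
reg add "HKLM\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\PermittedManagers" /v 1 /t REG_SZ /d 192.0.2.10

:: Restart the service so the changes take effect
net stop SNMP && net start SNMP
```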

Some interesting things I’ve stumbled upon:


Linux/Unix

On Linux/Unix, you will need to configure snmpd.conf. You can read more about it in the snmp_config and snmpd.conf man pages. Below is a basic sample configuration, although you can get much more complex and do a lot more with it. Once you update /etc/snmp/snmpd.conf properly, restart snmpd:

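A minimal read-only snmpd.conf along these lines is enough for most monitoring setups. YOUR_STRING and the subnet are placeholders; net-snmp supports far more granular views and access control than this.

```
# /etc/snmp/snmpd.conf -- minimal read-only sketch
rocommunity YOUR_STRING 192.0.2.0/24   # read-only access for the monitoring subnet
syslocation "Server Room 1"
syscontact  admin@example.com
agentaddress udp:161                   # listen on the standard SNMP port
```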


Cisco

Configuration of SNMP on Cisco devices varies slightly depending on the device type, but in general the commands are nearly identical.
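As a generic IOS sketch (the trap receiver address is hypothetical, and exact syntax can vary by platform and IOS release):

```
! Read-only community string for the monitoring server
snmp-server community YOUR_STRING RO

! Optional identification fields
snmp-server location Data Center 1
snmp-server contact admin@example.com

! Send v2c traps to the monitoring server (hypothetical address)
snmp-server host 192.0.2.10 version 2c YOUR_STRING
```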

Here are some links to helpful Cisco documents:


ESXi

Depending on your version of ESXi, the setup steps will change. For the sake of sanity, I have included only ESXi 5.0 and 5.1+; prior to 5.0, the steps were significantly different.

ESXi 5.0: VMware documentation

ESXi 5.1+: VMware documentation

The commands below will set up SNMP and allow it through the firewall. If you prefer, you can set up the firewall rules using the vSphere Client GUI under Configuration > Security Profile. Replace “YOUR_STRING” with your community string:

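On ESXi 5.1+, the procedure boils down to a handful of esxcli calls. The sequence below is a sketch based on VMware's esxcli namespaces; verify the options against your exact build before relying on it.

```
# Set the community string, then enable the SNMP agent
esxcli system snmp set --communities YOUR_STRING
esxcli system snmp set --enable true

# Allow SNMP through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id snmp --enabled true
esxcli network firewall refresh
```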


That’s our lesson for today. Use your knowledge wisely.

Learn why SNMP is the most versatile and comprehensive protocol in your toolkit >> Read More


Remember the corporate accounting scandals that took out Enron, Arthur Andersen and WorldCom? They all ended with prison sentences, layoffs and billions of investor dollars lost forever.

The Sarbanes-Oxley Act of 2002 (SOX) is meant to prevent scandals like these from happening again. How? By establishing strong and transparent internal control over financial reporting (ICFR). All publicly held American companies and overseas companies that have registered securities with the Securities and Exchange Commission (SEC) must demonstrate SOX compliance. The same goes for any company providing financial services to any of these firms. According to CFO.com, more than half of the larger companies registered with the SEC will pay $1 million or more to achieve SOX compliance.

What part of this is relevant to you as an IT pro? In 2007, the SEC issued SOX compliance guidance clarifying the IT team’s responsibilities: to identify the company’s biggest priorities when reporting financial risk, sometimes with help from auditors. Your role, then, is to support the processes that minimize all identified risks. The most pertinent sections of SOX for IT teams are 302, 404, 409 and 802. Here they are — or, rather, here’s what they mean.

Section 302: Keep Execs in the Loop

SOX requires the CEO and CFO to vouch for the accuracy of a company’s financial statements. They need to attest that they’ve evaluated ICFR within 90 days of certifying the financial results.

The IT team’s role is to deliver real-time reporting on their internal controls as they apply to SOX compliance. This requires automating tasks like testing, evidence-gathering and reporting on remediation efforts. Reporting should be delivered in both auditor- and executive-friendly language.

Section 404: Establish Controls to Support Accurate Financial Reporting

According to SOX, all businesses should have internal controls in place for accurate and transparent financial reporting. An outside auditor should review these controls every year, assessing how well businesses document, test and maintain those controls.

The IT team’s role here is to identify key IT systems and processes involved in initiating, authorizing, processing and summarizing financial information. This material usually involves security, application testing, the verification of software integrations, and automated process testing. The goal is to ensure all procedures support the accurate and complete transmission of financial data while keeping asset-bearing accounts secure from unauthorized access.

Section 409: Deliver Timely Disclosure

Certain events — like mergers and acquisitions, bankruptcy, the dissolution of a major supplier or a crippling data breach — can significantly shift a company’s fiscal prospects. SOX compliance mandates the timely disclosure of any information that could affect a company’s financial performance.

The IT team’s role is to support alert mechanisms that could trigger this timely disclosure requirement, as well as mechanisms for quickly informing shareholders and regulators.

Section 802: Ensure Records Retention

Today’s SMBs keep both paper and electronic copies of sensitive records when bookkeeping. Spreadsheets on an end user’s computer, email messages, IMs, recorded calls discussing money, financial transactions — all of these have to be preserved and made available to auditors for at least five years.

The IT team’s role is to preserve these records with automated backup processes and ensure the proper function of document management systems (which may or may not include an archive of email and related unified-communications content). IT pros also have to maintain the availability of these records as the company migrates to new technologies, such as from old tape-based systems to cloud backup.

Making Audits Go Smoothly

The Unified Compliance Framework (UCF) aggregates requirements from big regulations like SOX, HIPAA and PCI DSS, along with requirements from federal and state laws. With UCF, the IT team can adopt a set of controls to satisfy multiple regulations.

Network Frontiers, which manages UCF, keeps it up to date, which is a huge timesaver for your team. Ron Markham, co-founder of Intreis and former CIO for IBM’s Software Group-Business Analytics, used UCF to cut IBM’s audit time to two weeks and reduce audit-related costs by 80 percent.

In addition to what Markham calls his “test once, comply many” approach, Markham recommends a unifying platform that automates workflows. The solution should integrate a configuration management database (CMDB) and serve as IT’s system of record.

Documenting processes and packaging them in a way that’s easy to audit, both for management and outside auditors, prevents frantic pre-audit scrambling. It also saves those most precious of resources: time and money.


A certified information systems auditor (CISA) carries a specialty certification that indicates a mastery of IT security in the realms of governance, risk and compliance. And although it’s not required, CISA certification is a big boost for the IT department in some surprising ways.

Not super familiar with it? Here’s an overview of what CISA is and why you ultimately need to know about it.

IT Security = Job Security

Improving security has become an essential function of the IT department, especially with BYOD a reality and new vulnerabilities getting discovered every day. It sounds demanding, but an IT pro who has this certification is uniquely equipped to see where security weaknesses are and rectify them swiftly using the most efficient techniques available.

Do You Qualify?

To qualify for CISA certification, candidates need a minimum of five years of professional experience in information systems auditing, control, assurance or security, and must also pass a one-time CISA exam administered by the Information Systems Audit and Control Association (ISACA). ISACA is also responsible for awarding the certification itself.

Dust Off Your SAT PTSD

The exam is designed to be difficult, with no clear order to any one section of the 200 multiple-choice questions administered over a four-hour period. ISACA doesn’t publish pass/fail rates, although information gathered by the University of Virginia suggests only 50 percent of candidates pass (don’t get discouraged; more than 50,000 have succeeded worldwide). Keep in mind certification is awarded upon passing the exam, but to maintain it, IT pros must consistently adhere to the ISACA Code of Professional Ethics and comply with the organization’s continuing professional education policy.

You can always go to ISACA’s website to take a CISA practice exam. This is a great way to self-assess.

What the Certification Gets You

CISA certification is not for the faint of heart, but the hard work that goes into earning it is well worth the credentials you receive. CISA is ideal for any professional working in IT, but it is crucial for those looking to demonstrate a mastery of IT security audits and control operations. The certification also provides an avenue for IT pros to stay abreast of updates and changes in technology, keeping their IT department ahead of the curve. Because the program is constantly updated to reflect new network challenges, its continuing-education requirement is a great way to stay on top of ever-changing IT trends.

Even though some IT pros would rather have a root canal procedure over a compliance audit, these regular checks are necessary for midsize businesses to ensure each important standard is upheld. And it isn’t a short list to cover, with the most common suspects including PCI-DSS, HIPAA, FISMA, GLBA, SOX, and ISO 27001, among others.

One of the easiest ways to streamline the compliance audit process is to implement a managed file transfer system that includes specific visibility and control features.

Skeptical? Here are three ways managed file transfer makes getting audited a little easier:

1. Dude, Where’s My File?

The biggest benefit of file transfer visibility is the support team’s ability to search for a specific file and see exactly where it originated, where it ended up and how it got there. Whether you’re looking for a single document shared between a handful of employees or an app that was deployed to dozens of workers across the office, increased visibility can help identify all data movement across a network, no matter how large or small. This is massive for your department, since auditors look to track down potentially troublesome files or test the robustness of certain security parameters.

2. Make Sure You Review Activity Logs

Simply knowing where files are isn’t enough to make a tangible difference during the compliance auditing process. Auditors who are looking to investigate specific usage metrics can benefit greatly from looking at activity logs that include detailed information about time of transfer, recipient information and changes in status. Logs that are this extensive — especially for HIPAA compliance, according to SecurityMetrics — will help keep the IT department from tracking down data points manually, presenting auditors with an easy way to analyze current trends in the IT department (and of course make more precise suggestions for improvement).

3. Reporting in for Duty

If there’s one word that truly instills fear and dread in all of us, it’s “report.” Although there are plenty of applications that can help you find and analyze large amounts of information, the manual information-gathering process for file transfer data can be cumbersome and put serious constraints on an already time- and bandwidth-strapped department. Fortunately, managed file transfer programs with enhanced visibility allow you to compile data transfer reports with custom parameters in just minutes, saving you up to several hours of painstaking searches and extensive data analysis.

These benefits will help the auditing process, but they also assist your team on a day-to-day basis. Being able to track files across a network, get status updates in real time and receive notifications about unusual activity will help your IT team stay on top of even recurring file transfers and keep operations running as smoothly as possible. One of the easiest ways to streamline the compliance audit procedure is to implement a managed file transfer system and eliminate, or at the very least lessen, the dread within the department that often accompanies the process.



In July this year, a computer fault forced United Airlines to ground its flights in the US for the second time in a matter of weeks. The problem, it turned out, was a ‘network connectivity’ issue caused by a router malfunction.

The impact of the glitch proved significant for United, from both an operational and a brand reputation perspective. The two-hour-long issue caused delays to more than 90 aircraft, resulting in the company once again hitting the headlines for all the wrong reasons.

Following hot on the heels of an earlier incident in June, which saw the airline enforce a short flight ban after incorrect data appeared in its flight planning system, this second outage ended up costing the firm dearly as its shares fell more than 1.5 percent in the day’s trading.

Network Performance Defines your Reputation

It may seem surprising that something as simple as a router malfunction was enough to derail operations at United and generate serious financial and operational consequences.

But today’s networks are constantly evolving and becoming ever more complex – incorporating wired, wireless, physical, cloud, virtual, hosted, on-premise, and hybrid systems and applications. What’s more, IT departments are struggling to cope with escalating IT security and regulatory compliance demands – while battling the challenge of hidden threats generated by BYOD, rogue and non-sanctioned devices.

So, while the United Airlines story demonstrates just how dramatically things can go wrong in a moment, it does put the spotlight on the bigger question of just how well prepared the majority of organizations are when it comes to assuring network and application performance.

It also highlights why efficient and effective network and application monitoring tools are becoming a ‘must have’ in a world where the non-availability of a network frequently equates to ‘no business’ – and results in frustrated customers, disgruntled employees and perplexed partners.

Downtime is Not an Option

Today’s IT pros are being tasked with the increasingly difficult job of keeping their organization’s network running effectively and efficiently. Ensuring business continuity is the name of the game – because the real cost of downtime is crippling.

According to Dun & Bradstreet, the productivity impact of downtime alone is estimated at more than $46 million per year for a Fortune 500 enterprise. And while the exact hourly cost of downtime for a midsized business may be lower, the proportional impact is much larger. As organizations continue to automate and depend on the network to get business done, the availability and performance of critical systems is a deal breaker.

In today’s non-stop hyper-connected world, systems must be up 24/7. And if IT can’t pinpoint problems fast, the business impact can be crippling. To minimize risk and the cost of downtime, IT teams need to be able to mitigate issues before users are impacted, rapidly finding and fixing any problems that do occur.

Are You Monitoring the Entire Network?

With IT complexity growing every day, at an almost exponential rate, current strategies, resources and personnel may not be enough to keep pace. With the IT team juggling reduced headcounts and/or budgets – while being tasked with delivering more for less – the pressure is on like never before to continuously monitor and manage every aspect of the network.

One thing is clear. IT departments simply can’t afford to make the slow trudge through an entire suite of solutions hoping to find the root cause of a problem. With 70% of organizations reporting in a recent survey that a critical network event took at least one business day to resolve, achieving a comprehensive real-time view of network and server performance, availability and health has never been more important.

United Airlines, Meet Unified Monitoring

As the pressure mounts on IT to meet availability targets, automated and unified network monitoring becomes essential as a means to understand, monitor and inform IT teams about a network’s makeup, health, and potential and actual problems.

Today’s IT teams need tools that solve real problems, install easily and don’t require huge teams of experts to configure – tools that make it easy to get up and running in hours, not days or weeks, and to quickly discover the network and its dependencies. These tools need to deliver real-time monitoring and early warning alerts, so IT can respond fast before a minor issue escalates into a full-blown problem.
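As a rough illustration of the early-warning idea, a monitoring loop at its simplest just polls each device and flags anything unreachable. This is a minimal sketch, not a substitute for a full monitoring product, and the hostnames and ports below are hypothetical placeholders:

```python
import socket

# Hypothetical device list -- substitute your own hosts and service ports.
DEVICES = [("core-router.example.com", 22), ("app-server.example.com", 443)]

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def failing_devices(devices):
    """Poll every device once and return the ones that did not respond."""
    return [(host, port) for host, port in devices if not is_reachable(host, port)]
```

A real tool layers automatic discovery, dependency mapping and alert routing on top of this basic reachability check, so one failed router doesn't generate an alert storm for everything behind it.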

In the case of United Airlines, where a single router brought operations to a halt, a review of the airline’s network infrastructure, monitoring tools and management processes should help identify where the original issue lay.

But other organizations would do well to consider how well they’re tackling the growing challenge of keeping their network running efficiently and effectively.

GeoEngineers is a small business with enterprise demands and a lot of pressure on its network. As an engineering firm with 400 employees across 12 offices, it faces many unique network management challenges. For example, its earth science engineers must be able to back up project files and submit data from the field at any time of day and from any location.

Mitchel Weinberger, a systems engineer at GeoEngineers, is responsible for providing 24/7 network uptime. Providing every employee with on-demand access to the files they need, when and where they need it, is vital to business operations.

If the network is not properly managed, the company’s engineers are stuck waiting to access the data they need instead of working productively. As a firm that bills by the hour, GeoEngineers can’t charge clients for time lost waiting for a download or upload. The direct result of poor network performance is less revenue for that period, along with far-reaching cascading effects, such as projects running behind schedule.

Tasked with an important responsibility, Mitchel started exploring different solutions to solve these unique challenges.

Network Delays had to End

Mitchel analyzed his key challenges and narrowed them down to the top two. The first was an inability to identify high bandwidth users or applications. The second top challenge came from having no way to easily monitor server issues, such as CPU and memory consumption.

Both of these challenges created an environment in which employees were unable to reliably access the information they needed.

As a firm with projects in diverse fields, such as oil exploration and environmental cleanup, data was often gathered out in the field and sent back to the office. Network delays hampered the engineers’ work and were a source of frustration.
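To make the first of those challenges concrete: identifying high-bandwidth users is, at its core, an aggregation problem over flow records. A hedged sketch follows — the two-field record format here is invented for illustration, while real flow exporters (NetFlow, sFlow and the like) carry far more fields per record:

```python
from collections import Counter

def top_talkers(flow_records, n=5):
    """Sum bytes per source address and return the n heaviest senders.

    flow_records is an iterable of (source_address, byte_count) pairs.
    """
    totals = Counter()
    for src, nbytes in flow_records:
        totals[src] += nbytes
    return totals.most_common(n)

# Illustrative records: one address sends two flows totaling 1500 bytes.
records = [("10.0.0.5", 900), ("10.0.0.7", 100), ("10.0.0.5", 600)]
print(top_talkers(records, n=1))  # -> [('10.0.0.5', 1500)]
```

A flow monitoring tool runs this kind of aggregation continuously and can break it down per application port as well as per host, which is what turns "the network is slow" into "this host is saturating the link."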

After his due-diligence research, Mitchel decided on Ipswitch WhatsUp Gold along with our Flow Monitor tool to provide real-time monitoring of the network and send bandwidth usage alerts.

Finding Network Bandwidth Problems

WhatsUp Gold provides real-time network monitoring, which, combined with Flow Monitor, easily identifies network issues. Internal issues generate an alert for Mitchel and his team to prevent them from worsening. If there’s an issue with the service provider, Flow Monitor provides the data that can be sent to the ISP to quickly resolve the problem.

Server monitoring, customizable dashboards and filtered alerts let sysadmins receive only pertinent information. Coupled with a custom interface that shows the real-time status of the network and servers, Mitchel now had time to address his core challenges. High-bandwidth users and applications became visually identifiable, and meaningful alerts reduced mean time to resolution. Monitoring the status of servers and the network became as simple as opening a dashboard.

Lowering the ISP Bill

When GeoEngineers first set up its offices, the company installed a 50 Mbps line at every location. Knowing the importance of providing all the bandwidth the engineers would ever need, it seemed at the time to be a sensible decision.

Once Mitchel had WhatsUp Gold and Flow Monitor in action, he noticed that not all GeoEngineers locations needed as much speed as others. While eight offices have teams of 15 to 100 people, four satellite offices have only a few people each. That allowed Mitchel to decrease the bandwidth in locations that never approached the 50 Mbps limit, reducing monthly operating costs.

To discover the deeper details about the outcomes GeoEngineers experienced, head over to our recording of Mitchel’s recent talk at the Ipswitch Innovate Virtual Summit. Mitchel also provides in-depth technical details about how Ipswitch solutions addressed GeoEngineers’ needs.

Here at Ipswitch I have the pleasure of working closely with U.S. Government agencies to help them sort out their most pressing network monitoring challenges. Government IT pros know they need tools to monitor their networks but have a difficult time choosing one. And once they buy what seems right, it can end up costing even more time and money trying to get it to work right.

It doesn’t have to be this way. Government agencies can’t afford to spend money on features they’ll never use, or training they shouldn’t need. And they certainly can’t afford to struggle on their own trying to collect data about their networks and applications. What’s worse, oftentimes the employees charged with purchasing a solution are told to buy something to meet certain requirements based upon problems encountered. But that’s all they sometimes get, with no further guidance on what type of product can solve the problem.

Ever-present budget constraints mean agencies have to be as cost-conscious as possible. Think LPTA (or “Lowest Priced Technically Acceptable”), which dictates that products acquired by government agencies must meet required technical capabilities at the best price. That can make finding the right software even harder.

Ipswitch has a long history and strong footprint in the U.S. Government. Our WhatsUp Gold network monitoring tools are used by the U.S. Department of Defense and other U.S. Federal agencies. They all use our products to connect securely and receive an unimpeded flow of data pertaining to their classified and standard networks. These networks don’t all reside in government buildings in D.C. Some are self-contained within an active warship or in a tent out in the field.

Many of the government agencies that come to us for help have been using a similar product but grew tired of deployment issues or even figuring out how the software works. This just illustrates how ease-of-use goes a long way in determining if a government agency can stay under budget and hit the ground running. Government agencies are very focused on digitizing and virtualizing services to improve operational efficiencies and reduce costs. They can’t lose money and time trying to learn how to manage and monitor IT resources.

It’s enjoyable to see our government customers’ reactions when they have our software up and running within a day. And not only implementation within a day, but also generating analytics and information to help them make informed decisions. And they’re doing this before any type of formal product training. That’s something no IT pro will complain about.

One of the biggest challenges for any IT team is figuring out their network assets and inventory, or managing configuration changes. Tune in Tuesday, December 8, for a webcast at 2pm US ET, when Michael Roth, senior systems engineer at the University of North Georgia, will share his best practices for effectively managing network inventory and configuration changes.



Today’s SMBs are generally more security-conscious than their 20th-century counterparts, and actively take steps to prevent data loss. Unfortunately, however, mistakes are still made at the employee level that are seldom accounted for when designing protocols.

In the late ’90s, the ‘hilarious’ free cupholder email prank (with an executable attachment) kicked out the tray of an employee’s optical drive. Everyone laughed as IT professionals cringed, knowing full well that an innocuous-looking executable could just as easily have spread a virus throughout the network or installed a keylogger. They probably wouldn’t laugh today.

Unfortunately, while today’s users are often savvy enough not to launch executable files received by email, little has changed and human error remains the most common cause of data breaches. In fact, it was responsible for over 90 percent of all reported breaches in the Verizon 2015 Data Breach Investigations Report. Since replacing the workforce is impractical for a growing business, the IT infrastructure and security processes around it have to compensate. One approach is to form a risk management team that trains employees according to a defined company security policy. The team can then perform risk assessments to identify potential weaknesses and lock down each one, amending the policy as new threats are identified.

Complicated by new technology, increasing data volumes, bring your own device (BYOD), mobility, the cloud, Internet of Things (IoT) and more, the threat landscape is increasingly difficult to manage. But it’s easy to classify.

Unintentional and Internal

This is generally an issue of workforce literacy. Enforcing the company’s security policy and making staff aware of cybercriminals’ most common attack methods can substantially reduce this problem. How? By covering risk areas like these, some of which you may already be addressing:

  • Due diligence when opening email attachments
  • Awareness of phishing red flags
  • Use of the public cloud or mobile apps that aren’t company-approved
  • Weak account passwords that are never changed. Social engineering on social media (or other publicly displayed platforms of data) can lead a hacker to one’s ‘secret’ password or security question
  • The loss or theft of a physical device, such as a smartphone
  • Low- or no-tech methods such as dumpster-diving, shoulder-surfing or poor building security that allows for direct network access
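The weak-password bullet is one of the few items on that list that can be partially automated. Below is a minimal sketch of a strength check that a risk management team might fold into account provisioning; the common-password set is a tiny illustrative stand-in for a real breached-password corpus, and the length and character-class thresholds are assumptions you would tune to your own policy:

```python
import re

# Tiny illustrative set -- a real check would use a breached-password corpus.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def password_is_weak(pw):
    """Flag passwords that are short, commonly used, or lack character variety."""
    if len(pw) < 12 or pw.lower() in COMMON_PASSWORDS:
        return True
    # Require at least three of: lowercase, uppercase, digits, symbols.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, pw)) for c in classes) < 3
```

Checks like this catch the obvious offenders at creation time; they don’t replace the social engineering awareness described above, since even a strong password can be given away.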

Intentional and Internal

This is more difficult to deal with, given the countless methods available to disgruntled staff for sharing and gathering data. Think about audio and video via mobile devices, as well as cloud storage, unified communications (UC), file sharing, free email accounts and more. Even a simple printout of account passwords can result in data loss.


It goes without saying that firewalls, threat intelligence systems, antivirus software and the like should be in place. It’s also important to monitor network traffic for discrepancies and to ensure that any file transfers taking place are protected, encrypted and access-controlled by employee role — usually with an audit trail for compliance purposes. When correctly configured, your file transfer solution will integrate with your data-loss prevention system and facilitate recovery if a transfer is interrupted. Application updates and security patches should also be installed promptly, as hackers are quick to exploit vulnerabilities in outdated software.
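One building block behind protected, controlled transfers is integrity verification: sender and receiver compare a cryptographic digest so a truncated or tampered file is caught before it enters a workflow. A minimal sketch using Python’s standard library — a managed file transfer product automates this step, alongside encryption, access control and audit logging:

```python
import hashlib

def sha256_of(path, chunk=65536):
    """Compute a file's SHA-256 digest, reading in chunks to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def transfer_is_intact(local_path, expected_digest):
    """Compare a received file's digest against the value the sender published."""
    return sha256_of(local_path) == expected_digest
```

The same digest also tells a resuming client whether a partially transferred file matches the source so far, which is what makes safe restart-after-interruption possible.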

Fires Start Small and Hard Drives Aren’t Perfect

Data loss is not limited to the actions of the malicious or ill-informed; there are also hardware issues to consider. Like cybercrime, hard-drive failure will happen — it’s just a matter of when. Keeping all your data in one place is never recommended and, in the event of a natural disaster (fire, flood, you name it), both offsite and onsite backups are essential.

Old hard drives may make attractive coasters or works of art, but remember: Most of them, even if fire- or water-damaged, can be recovered using skilled data forensics techniques. When donating computers to charity or a local school, it’s best to degauss or physically destroy hard drives if they’ve ever contained confidential data. Your donation may not be as well received, but your data is secure.

Preventing data loss is an ongoing task, and regular staff training is necessary as new threats appear daily. Only by staying vigilant can companies protect themselves and avoid penalties from government or industry bodies for lack of compliance.