Ipswitch Blog

Where IT Pros Go to Grow

Our Latest Posts

Ipswitch’s FTPS server gave the Broncos the defense they needed for protecting data in motion.

Data Security a Huge Issue for NFL Teams

After a season of highs and lows, the Denver Broncos are headed to Super Bowl 50 to face the Carolina Panthers. But teamwork, dedication and hard work aren’t the only things that contributed to the Broncos’ surge to the NFL’s championship game.

The amount of data generated by an NFL team is staggering. Besides statistics, plays, strategies and a crunch of information that would make some quarterbacks’ heads hurt, the business of running a professional sports team requires videos, photos and graphics to be distributed to special events, marketing and fan relations partners.

Because of email and private network restrictions, all of this data used to be downloaded to discs, thumb drives or hard drives. They would then be delivered by hand to players, coaches and other important members of the Broncos team.

WS_FTP is Broncos’ Choice for an FTPS Server

But this hand-delivery process was time-consuming, inefficient and, not least, a huge data security risk. Ipswitch’s WS_FTP Server came to the rescue the same way Brock Osweiler saved the day – or at least didn’t blow it – this past season when quarterback Peyton Manning missed some of the action with an injured foot.

The franchise’s use of Ipswitch WS_FTP Server, an FTPS (File Transfer Protocol Secure) server, gave it a great defense for protecting data in motion: plays, high-definition videos, graphics and more, delivered securely to players, coaches and business partners. You could argue that file transfer capabilities didn’t directly get the Broncos to the biggest game in football, but they certainly didn’t hurt.

Unlike Osweiler, who subbed for Manning only temporarily, WS_FTP Server was a permanent solution to the Broncos’ file transfer woes. WS_FTP Server is secure enough to keep confidential team information out of the wrong hands – some would unfairly imply out of the New England Patriots’ hands. It’s also powerful enough to handle the influx and growth of data, and gives ultimate visibility and control for top achievement.

Another key quality of WS_FTP Server is its uninterrupted service, which increases uptime, availability and consistent performance with a failover configuration. Unlike the Microsoft Surface tablets that failed the Patriots during the recent AFC Championship, WS_FTP Server won’t go down or leave the Broncos’ files in limbo, unprotected and undelivered.

NFL Becoming a Technology-Driven Business

The NFL’s need for quality IT service goes beyond devices displaying plays and diagrams. File transfer played a role in the way football went from throwing a pigskin down a grassy field to being a technology-driven business.

By providing partners with just a username and password, transferring files is completed in just a few clicks. So before the Broncos head to Santa Clara for the big game, the team can rest easy knowing its files are secure and accessible by all players, coaches, team executives and business professionals keeping the team running smoothly.
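That username-and-password workflow is simple enough to sketch. As an illustration only (not the Broncos’ actual setup), here is roughly what a scripted upload to an FTPS server such as WS_FTP looks like using Python’s standard ftplib; the host, credentials and file names are placeholders:

```python
from ftplib import FTP_TLS

def upload_secure(host, user, password, local_path, remote_name):
    """Upload one file over FTPS (explicit TLS), the protocol WS_FTP Server speaks."""
    ftps = FTP_TLS(host)
    ftps.login(user, password)   # the username and password are all a partner needs
    ftps.prot_p()                # encrypt the data channel, not just the login
    with open(local_path, "rb") as f:
        ftps.storbinary(f"STOR {remote_name}", f)
    ftps.quit()
```

The `prot_p()` call is the important part: without it, only the credentials are encrypted and the file itself would travel in the clear.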

Read the Ipswitch File Transfer Case Study: Denver Broncos

We’ll find out Sunday if the Broncos and Manning can beat the tough Panthers, if the commercials will make us laugh and if Beyoncé and Coldplay will dazzle with their halftime show. But one thing the Broncos and all Ipswitch customers will always be assured of is the success, security and compliance of the WS_FTP Server file transfer solution.


Is There Such a Thing as Too Much Visibility?
Sometimes broad visibility can make it hard to see

Every day, many of us commuters have visibility issues and are at the mercy of unpredictable traffic. Often I have to leave a LOT of buffer to get to work in case I have an important meeting. Luckily, there are tools out there that put good traffic visibility at my fingertips. For instance, I rely almost entirely on Google Maps to “predict” how long it is going to take me to get to work, or to any other place for that matter. This type of visibility is crucial when things go wrong, like an accident. Google Maps will reroute me or at least give me a revised ETA so that I can make adjustments.

Fix Before You Fail

Is there an analogy to this in the online world? You would think that service providers and large enterprises would have this level of visibility into their networks, so when things go wrong, like a device failure, they can pinpoint the root cause right away and take corrective action. Better yet, they can stay ahead of the game by watching for performance bottlenecks or warning signs of failure and fixing issues before end users are affected.

BT Broadband Network Outage is a Lesson for SMBs

So, when the very large BT broadband network went down today, I wondered if there is such a thing as too much visibility. Despite the service-provider level of visibility they have, it took BT almost two hours to get all their customers back online. Now imagine if you were an SMB or a mid-sized organization faced with a similar outage. Without sufficient visibility into the problem, your network could be down for hours, costing you, your employees and your customers significantly in revenue, productivity and reputation.

How can today’s SMBs get service-provider-level visibility that won’t break the bank? Here are some pointers:

  • Invest in a network monitoring tool that can discover all of your critical infrastructure
  • Make sure the tool can provide insight into the availability, performance and security of your infrastructure
  • Choose a tool broad enough to support multiple monitoring technologies (e.g. SNMP, WMI, network flows) across your entire infrastructure (network devices, servers, wireless devices, applications, virtual machines, etc.)
  • Ensure the tool can give you proactive insights as well as reactive alerts
  • Consider the tool’s total cost of ownership, from initial deployment through ongoing maintenance. Remember that DIY is not always free over the lifetime of owning the tool
  • Do not “under monitor” during the evaluation. Develop a monitoring configuration that reflects the entire production network, not just a subset convenient for the trial
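The “proactive insights as well as reactive alerts” pointer is worth a concrete sketch. The toy Python check below (my own illustration, not any particular product’s logic) compares recent latency samples against an earlier baseline so a device can be flagged as degrading before it goes down entirely; the thresholds are arbitrary:

```python
from statistics import mean

def check_latency(samples_ms, warn_factor=1.5, fail_ms=1000):
    """Classify a device from recent latency samples:
    a proactive warning before a reactive failure alert."""
    baseline = mean(samples_ms[:-3]) if len(samples_ms) > 3 else mean(samples_ms)
    recent = mean(samples_ms[-3:])
    if recent >= fail_ms:
        return "DOWN"   # reactive alert: device unreachable or far too slow
    if recent > baseline * warn_factor:
        return "WARN"   # proactive insight: a degradation trend, act before users notice
    return "OK"
```

A steady device stays "OK", one whose latency is climbing trips "WARN" first, and only a dead or crawling device raises the reactive "DOWN" alert.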

Did you know your mobile phone and wearables are just as appealing to hackers as your online bank account? No one is impervious to increasingly sophisticated mobile device hacking. Case in point, James Clapper, the U.S. director of national intelligence (DNI), had his phone hacked last month with calls rerouted to the Free Palestine Movement. And in October 2015, CIA director John Brennan’s mobile device fell victim to the activity of a group of “pot-smoking teenagers.” Bottom line? Not even next-gen hardware is completely safe.

So long as support enforces two-factor authentication and staff doesn’t access free Wi-Fi hotspots (especially when handling business data), a mobile phone should be safe, right? Nope. As noted by Dialed In and Wired, determined hackers do a lot more with your mobile and wearable technology than you may realize.

Mobile Phones: Hackers’ Best Friend

Any iPhone newer than the 4 comes with a high-quality accelerometer, or “tilt sensor.” If hackers access this sensor and you leave the phone on your desk, it is possible for them to both detect and analyze the vibration of your computer keyboard and determine what you’re typing, with 80 percent accuracy. So, say you type in the address of your favorite banking web portal and then your login credentials; hackers now have total access.

App developers have wised up to hackers targeting microphones and made it much more difficult to gain access without getting caught. Enterprising criminals, however, have discovered a way to tap a phone’s gyroscope and detect sound waves through it while you play Angry Birds or any other orientation-based program. So, next time you talk about finances with your significant other while three-starring a new level in your go-to mobile game, you may also be giving hackers the information they need to steal from you.

Targeting RFID Chips

In an effort to make retail purchases easier and more secure, many credit cards come equipped with RFID chips. Smartphones, meanwhile, include near-field communication (NFC) technology that allows them to transmit and receive that RFID data. The risk, here, is that hackers who manage to compromise your phone can leverage malware to read the information from a card’s RFID chip if you’re storing it in a nearby wallet or card-carrying mobile case. Then they can make a physical copy. You’re defrauded and don’t even know it.

“Say Cheese”

Mobile cameras have also come under scrutiny, since hacking this feature lets attackers take snaps of you or your family whenever and wherever they want. Despite improvements in basic phone security, though, it’s still possible for malicious users to take control of your camera. It goes like this: Operating systems like Android now mandate that a preview of any new photograph must be displayed on-screen, but they don’t specify how large that preview must be. A preview as small as a single pixel satisfies the rule while being effectively invisible, so cybercriminals can take surreptitious photographs and then send them to anyone at any location.

MDM Leads to Risk

A large number of smartphones contain weak mobile device management (MDM) tools installed by carriers. And although reaching these tools in a target phone requires close proximity and the use of rogue base stations or femtocells, the risk is substantial. Attackers could take total control of your device.

Fit or Foul?

Mobile phones aren’t the only technology at risk; wearables are also open to attack. What can hackers do to these devices? Back in March 2015, wearable maker Fitbit was notified by researchers that its device could be hacked in fewer than 10 seconds. While initial reports focused on logical changes such as altering steps taken or distance walked, as noted by The Hacker News, it wasn’t long before hackers discovered a way to inject malware that potentially spreads to all synced devices.

Potentially Lethal Consequences

Security flaws in wireless-enabled pacemakers could allow hackers to take control of (and then stop) this critical device as well. In September 2015, a team from the University of Southern Alabama managed to access a functioning pacemaker and “kill” a medical mannequin attached to the device.

Medical devices such as insulin pumps and implantable defibrillators have notoriously weak security — a lack of encryption and weak or default passwords, in particular — that cybercriminals can easily exploit to take control. The result? Delivering a fatal drug overdose or shocking perfectly healthy patients without warning.

Be Diligent About Mobile Security

The lion’s share of existing security issues stem from poor app development in mobile and wearable devices. Mobile device developers prioritize speed over security and eschew critical features such as encrypted commands, limited application sessions and disabling repeat requests. And while recognizing these flaws is the first step to improving mobile safety, users need to be aware of today’s risk factors. Right now, hackers can do far more with a mobile or wearable than the user may realize.

In the early years of IT, data was stored on paper tapes

What did an IT position look like in the ’70s, ’80s and ’90s? Far fewer mobile endpoints, for one thing. Compared with today, the history of information technology boasts some surprising differences in day-to-day tasks and the technology that was available. IT support has come a long way, folks.

How Far Back?

IT has been around almost as long as humans. If you think about it, hieroglyphics are just a script devs don’t use anymore. Mechanical devices such as the slide rule, the Difference Engine, Blaise Pascal’s Pascaline and other mechanical computers qualify as IT, too. But this particular journey begins well into the 20th century.

The 1970s: Mainly Mainframes

Computers of this era were mostly mainframes and minicomputers, and a history of information technology wouldn’t be complete without mentioning them. IT job roles included manually running user batch tasks, performing printer backups, conducting system upgrades via lengthy procedures, keeping terminals stocked with paper and swapping out blown tubes. IT staff was relegated mainly to basements and other clean rooms that housed the big iron. System interconnectivity was minimal at the time, so people had to bridge those gaps themselves. This was the motivation behind the Internet (or the ARPANET, as it was known then).

The 1980s: Say Hello to the PC

This decade saw the growth of the minicomputer (think DEC VAX computers) and the introduction of the PC. Sysadmins crawled out of the basement and into the hallways and computer rooms of schools, libraries and businesses that needed them onsite. The typical IT roles at this time consisted of installing and maintaining file and print servers to automate data storage, retrieval and printing. Other business roles included installing and upgrading DOS on PCs.

If you worked in a school, you saw the introduction of the Apple II, Commodore 64 and, eventually, the IBM PC. But the personal computer was more expensive, deemed for business use and not deployed in schools very much. It was the Apple II that propelled the education market forward and, if you worked support at a school in the ’80s, you knew all about floppy disks, daisy wheel printers and RS-232 cables.

The 1990s: Cubicles, Windows and the Internet

This generation of IT worked in cubicles (think “Tron” or “Office Space”), often sharing that space alongside the users they supported. Most employees were using PCs with Windows by this time, and IT support was focused on networking, network maintenance, PC email support, Windows and Microsoft Office installations — and adding memory or graphics cards for those who needed them.

Toward the end of the decade, the Web’s contribution to Internet connectivity became arguably the most requested computing resource among growing businesses. Although there was no Facebook, Twitter or LinkedIn yet (Friendster would kick off that trend in 2002), employers still worried about productivity and often limited Web access. Oh, and if you could go ahead and add modems to PCs, run phone lines for those who needed dial-up access and Internet-enable the business LAN, that would be great.

Today’s IT: Welcome to Apple, Patch Tuesday and BYOD

Today, recent IT job roles have included the rebirth of Mac support, the introduction of social media (and the blocking of its access at work), constant security patches (Patch Tuesday on Windows, for instance), the advent of BYOD and DevOps automation.

The continued consumerization of IT (essentially now BYOD) meant that IT pros had “that kind” of job where friends and family would ask for help without pause. The one common thread through the years? The growth of automation in the IT role — something that will continue to define tomorrow’s helpdesk.

Image source: Wikimedia Commons

How IT pros can navigate a job interview
What can you do to make the IT job interview go well?

You’ve landed an IT job interview. That’s the good news. Now you have the interview itself, and let’s be honest, it’s never fun. Most candidates don’t like putting on a show of the software and protocols they’re familiar with. Even actors aren’t in love with auditioning. The “social” aspect of recruitment isn’t something you should need to ace for an admin position, but it has to be done.

If the job is a really good one — the technical work that’ll challenge your current support acumen (and compensate you well for the weekend maintenance) — you probably have a bit of an imposter complex even just applying. When the “ideal candidate” is an infosec wizard, how dare you present yourself? But hey, you believe you can do it, and the pay is great. So read that magazine and wait to be met.

Find Strengths in Technical Weaknesses

What can you do to make the IT job interview go well? Some things should be no-brainers, but there’s a reason think pieces keep pounding them into your head (present article excluded). Don’t be “creepy” with company research, advises InformationWeek, and don’t dress for the beach unless an offbeat SMB suggests otherwise. Do pay attention to the job description, though (don’t ask questions it already answered), and learn enough about the employer to imply a healthy interest.

Ultimately, play to your strengths. Lawyers have a saying: If the facts are against you, argue the law; if the law is against you, argue the facts. If you don’t have hands-on experience in data center migration, stress your credentials in bandwidth control during this process. Show that you know what’s involved in secure file transfers even if you haven’t managed them offsite. If your formal credentials are thin, play up your experience in the network trenches during the Super Bowl traffic spike.

Be Mindful of the Interviewers Who Don’t Work in IT

With luck, your interview with an IT rep will find some common ground. There may be scripts you’re both comfortable reading or security issues you should both be following. This gives you the chance to talk like a human about what the job will involve. One of the bigger challenges of an IT job interview, however, is that you may also meet someone from the business side. This person knows only vaguely what network monitoring tools are and is probably a bit intimidated by the idea of bandwidth or network latency. In other words, they probably feel like the imposter, interviewing someone for a seat in ops they don’t fully understand.

But one thing you definitely don’t want to do is remind the interviewer of their own uncertainties. Talk confidently about the work, without going so deep into the technical weeds that the interviewer isn’t sure what you’re saying. Although this shorthand may demonstrate fluency in a multi-vendor environment, it can also suggest you can’t communicate well with the other departments.

You’re a Social Animal

For better or worse, a job interview is a social interaction. Some sysadmins and IT pros would gladly trade the spotlight for wrestling with a wonky script or normalizing office bandwidth.

Nonetheless, this can produce a disconnect. As one IT candidate reported by Dice.com said when asked to describe the ideal work environment, “I just want a job where I can go in a room, do my work and be left alone.”

That candidate probably speaks for many admins, developers, and other overworked helpdesks, but he didn’t get the job. Business people (including those who work for nonprofits and government) tend to celebrate charisma, and for good reason: The job is all about meeting client needs, which means talking to the customer to understand what they really want.

The good news? Your competition is other techies, probably just as geeky at heart.

The bottom line is that if you’re comfortable about your qualifications for the job — even if it is pushing your limits — that confidence will show through, and help you navigate the rocky spots. And who knows, you may be just who they’re looking for.

best practices network mapping

In this blog, part of our series on IT best practices, I’ll share how network mapping works and how it will give you a complete vantage point of your entire network.

Modern networks are full of connected devices, interdependent systems, virtual assets and mobile components. Monitoring each of these systems calls for technology that can discover and map everything on your network. Understanding and enacting the best practices of network mapping lays the groundwork for successful network monitoring.

An Overview of Network Mapping

Most forms of network management software require what’s known as a “seed scope” – a range of addresses that defines the network to be mapped. Network mapping begins with discovery, using protocols such as SNMP, SSH, Ping, Telnet and ARP to determine everything connected to the network.

Adequately mapping a large network requires being able to make use of both Layer 2 and Layer 3 protocols. Together, they combine to create a comprehensive view of your network.
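To make the seed-scope idea concrete, here’s a minimal discovery sketch in Python. It expands a CIDR range into individual host addresses with the standard ipaddress module and probes each one with the system ping (Linux-style flags assumed); real mapping tools layer SNMP, ARP and the other protocols above on top of a sweep like this:

```python
import ipaddress
import subprocess

def expand_seed_scope(cidr):
    """Expand a seed scope (a CIDR range) into the host addresses to probe."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def is_alive(host, timeout_s=1):
    """One discovery probe: a single ICMP echo via the system ping command."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

A sweep is then just `[h for h in expand_seed_scope("192.168.1.0/24") if is_alive(h)]` – slow in serial form, which is why production tools probe many addresses concurrently.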

The Two Types of Network Maps

Network discovery protocols are broken up into two categories, or layers:

  1. Layer 2: Defined as the “data link layer,” these protocols discover port-to-port connections and linking properties. Layer 2 protocols are largely proprietary, which is why the vendor-neutral Link Layer Discovery Protocol (LLDP) must be enabled on every network device.
  2. Layer 3: Defined as the “network layer,” these protocols explore entire neighborhoods of devices by using SNMP-based technology to discover which devices interact with other devices.

Surprisingly, most IT infrastructure monitoring solutions rely solely on Layer 3 protocols. While this succeeds in creating a comprehensive overview of the network, successful network mapping practices call for using Layer 2 protocols as well. Layer 2 protocols provide the important information about port-to-port connectivity and connected devices that allow for faster troubleshooting when problems arise.

Conveniently enough, Ipswitch WhatsUp Gold uses Layer 2 discovery with ARP cache and the Ping Sweep method, combined with Layer 3 SNMP-enabled discovery methods to provide all the information needed to quickly identify and address problems.
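As a rough illustration of what ARP-cache-based Layer 2 discovery yields (my own sketch, not WhatsUp Gold’s actual implementation), the snippet below parses `arp -a`-style output into an IP-to-MAC table – exactly the port-to-port adjacency information that speeds up troubleshooting. The sample text is invented:

```python
import re

# Output in the style of `arp -a`; a real tool would read the live ARP cache.
SAMPLE_ARP = """\
gateway (192.168.1.1) at 3c:8d:20:aa:01:02 [ether] on eth0
printer (192.168.1.40) at 00:1b:a9:33:44:55 [ether] on eth0
"""

def parse_arp_cache(text):
    """Map IP -> MAC from ARP-cache output: the Layer 2 neighbors this host has seen."""
    pat = re.compile(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f:]{17})")
    return {ip: mac for ip, mac in pat.findall(text)}
```

Merging tables like this from every device, plus Layer 3 SNMP data, is what lets a mapper draw which port connects to which machine.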

Creating Network Diagrams

Network diagrams make use of the data generated by Layer 2 and Layer 3 protocols, and are super helpful for visualizing the entire network. One important best practice for network mapping is using network diagrams to ensure that the existing networks and IT processes are fully documented – and updated when new processes are added.

Microsoft Visio is the leading network diagramming software on the market. When data is imported, Visio allows for the creation of robust, customizable diagrams that are easy to share between teams and organizations. Yet network managers who rely on Visio quickly discover that the lack of an auto-discovery feature severely limits its use.

Ipswitch WhatsConnected was created to solve this problem by auto-generating topology diagrams, which can be useful on their own or exported to Visio, Excel and other formats with a single click. WhatsConnected makes use of Layer 2 and Layer 3 protocols to provide Visio with everything it needs to generate the powerful diagrams it’s known for.

Instituting solutions that follow these suggestions should provide the foundation needed for real-time network monitoring. Coming up next in our best IT practices series, we’ll review network monitoring. Learning how to make the most of network discovery and network mapping will give your organization cutting-edge network monitoring capabilities.

Related articles:

Best Practices Series: Network Discovery

Best Practices Series: IT Asset Management

Football is no longer simply a game played on grass or turf — it's now awash in tech.

Things on the gridiron have changed. Once the province of paper-based play analysis, complicated hand signals and rules reliant on the eyes and ears of human refs, football is now awash in tech. Just take a look at the broken Surface tablets from last week’s AFC championship. With the Panthers and Broncos squaring up for Super Bowl 50 next week, here’s a look at the NFL technology (and IT teams behind it) that help elevate the sport while keeping its time-honored traditions intact.

It starts at Art McNally GameDay Central, located at NFL Headquarters in New York City. From here, Game Operations staff are tasked with prepping every communication and broadcast system before gametime while checking for radio frequency conflicts and handling failures prior to air. From a corporate standpoint, the GameDay crew is analogous to CIOs and their admin staff; they get the “big picture,” ensuring sysadmins on the ground have the information necessary to get their jobs done.

Clean Frequencies

Key to Game Ops is keeping radio frequencies clean. As the number of licensed frequencies approved by the Federal Communications Commission (FCC) continues to grow, fewer clear channels exist for team officials and their support staff to use. With this in mind, operations must make sure both teams, their coaches and all TV network crews use the right bandwidth spectrum for headsets, microphones and any Wi-Fi connections to prevent accidental “jamming,” which often leads to signal loss at a critical moment.

Operations staff are also responsible for ferreting out any “not-so-accidental” frequency interruptions; the New England Patriots’ “Headsetgate” comes to mind, especially since the team regularly shows potential as a Super Bowl contender. Did they really tamper with headsets? Maybe, maybe not — there have been a number of accusations over the past few years — but what matters for Super Bowl 50 is that Game Ops staff are up to the challenge of tracking down any technical issues regardless of origin or intent.

‘Instant’ Replay

Game Ops staff are also responsible for overseeing the use of NFL Instant Replay technology, which got its start in 1986, was removed in 1992 and then reimplemented in 1999. GameDay teams use the league’s proprietary NFL Vision software to analyze replays and communicate with both the stadium’s replay official and the referee before he goes under the hood — both of which shorten the length of a replay review. Think of it like analytics; the NFL is investing in software that can capture relevant data, serve it up to experts and empower users in (or on) the field.

On the Ground

Crews in the stadium during Super Bowl 50 are responsible for managing a few new pieces of hardware, including the Microsoft Surface tablets used to analyze defensive and offensive formations. But because these tablets have no Internet access and their software cannot be altered, the league is currently testing a “video review” feature that may be implemented in future seasons.

Not everything works perfectly, though. As noted by GeekWire, a problem during the December 8, 2015, matchup between Dallas and Washington forced these tablets out of service and left coaches with pen-and-paper diagrams. And on January 24, 2016, in the AFC Championship game, the Patriots suffered significant tablet malfunctions causing more than a few frustrations on the sidelines, especially since the Denver Broncos weren’t required to give up their still-working tablets under the NFL’s “equity rule.” February’s onsite IT teams will need to monitor not only the performance of the Sideline Viewing System, but also its connection to each team’s tablets. System monitoring comes to mind here: small precursor events in still-picture capture or tablet connectivity could act as warning signs of larger problems, if caught early enough.

Real-Time Stats

There’s also a need for data aggregation as the league moves toward full adoption of M2M systems like Zebra player tracking. Using RFID chips in each player’s shoulder pads, it is now possible to track their movements in-game in real time, then provide “next-generation stats” for fans. The larger value, however, comes in the form of actionable data obtained by recording and mining the information collected by these sensors. NFL technology professionals are tasked with not only ensuring this data stream is uninterrupted, but also making something useful out of the final product — a curated version of player data that trainers can use to improve Super Bowl performance.
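A tiny example of “making something useful out of” such a sensor stream: given timestamped (x, y) position samples like those an RFID tracking feed might produce, a next-generation stat such as top speed is only a few lines of Python. The sample data here is invented:

```python
from math import hypot

def top_speed(samples):
    """Top speed (distance units per second) from timestamped (t, x, y)
    position samples, the kind an RFID player-tracking feed produces."""
    best = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        # speed over one interval = straight-line distance / elapsed time
        best = max(best, hypot(x1 - x0, y1 - y0) / (t1 - t0))
    return best
```

Aggregating stats like this across a season is the “curated version of player data” trainers actually want, as opposed to the raw position firehose.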

Data Encryption

NFL teams need to transfer highly sensitive files containing details of trades, playbooks and player contracts. In the past, the Denver Broncos used thumb drives and CDs to physically pass around large data files, including high-res video and image files. It was a manual, unstructured process that proved to be a time-waster and lacked even basic security controls. Email was not an option because most IT teams limit the size of email attachments.

In order to secure their data in motion and move it without hassle, regardless of the size, the Broncos picked Ipswitch WS_FTP software for secure data transfer internally between departments, and externally with partners.

A New Career?

Interested in working support for the NFL? It’s possible: While the Cleveland Browns are hiring an intern, the Washington Redskins need help at the helpdesk and the Seattle Seahawks are looking for a CRM data analyst. Interestingly, the job descriptions read like standard tech sector advertisements; NFL clubs have become enterprises in and of themselves, requiring multiple teams of backend IT personnel in addition to those on the ground during regular and postseason play.

Even the NFL is not all glitz and glory for IT. In fact, the league’s mandate is similar to most tech firms: Keep systems up and running while collecting and curating actionable data. Ultimately it’s a team effort — the work of many, not the power of one, moves the chains and snatches victory from the jaws of overtime.

ipswitch community
Join the Ipswitch Community today!

When your network goes down or your computer isn’t operating as it should, sometimes the best thing to do is reboot. It’s often the first solution when troubleshooting problems. We took this notion and applied it to our Ipswitch Community. This month, we relaunched and combined our Ipswitch communities into one.

As IT pros know, an online community is a powerful tool, allowing folks to connect, learn and share thoughts, problems and ideas. With this in mind, we wanted to create a community where our customers and other IT pros can come together to give feedback about our products and services, ask questions, relate their own findings and build a network of other users.

Uniting Product Resources on the Ipswitch Community

The Ipswitch Community has different spaces for different products, such as WhatsUp Gold and File Transfer, but unites all these resources in one place. The Community is also connected to the knowledge base, for self-help, and links to additional support resources. So no matter how a customer wants to solve an issue, the full arsenal of tools is available.

The new Community experience has been simplified so it’s easier to use and faster to navigate, making it simpler for existing members to interact and more inviting for new users.

How to Get Involved and Join the Conversation


Come visit today and get involved. Community moderators have even provided tips on how to ask effective questions and get the most out of the community. My “Getting started with the community” post gives you useful links and tips: how to set up an account, update your profile and read the Community charter, plus how to craft better questions and ideas. Detailed descriptions, brief language and images help you get to your point quicker and attract more attention.

I think our Community charter sets a few reasonable guidelines. Requiring visitors to use real names and photos ensures they are interacting as people on the site. Constructive criticism is encouraged as it can establish a productive dialogue. And we do hope that all of our community members play nicely with others.

Beyond the basic facilities of forums, question asking and connection, active community members can get involved in feedback groups and beta testing, and talk with our product and UX teams. Community member involvement is a great way to hear from our customers and others while we strive to create great products and services.

Our Community is here for folks to learn together and provides an outlet for questions, concerns and insight. Join today to get closer to other users, my colleagues and our products.



This Thursday, January 28th is Data Privacy Day (aka Data Protection Day in Europe). The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. To honor the day, here are some ways you can protect personal healthcare information (PHI) in motion, an area of focus for healthcare IT teams handling PHI.

Personal Healthcare Info is a Hacker’s Dream

PHI is expected to be the data most sought after by cyber criminals in 2016. Hackers are moving away from other forms of cyber crime, such as attacks targeting bank accounts, and focusing more on PHI because of the wealth of information it contains: social security numbers, insurance policy details, credit card numbers and more.

The lack of a consistent approach to data security throughout the healthcare industry also makes healthcare data easier to obtain. The easier the data is to steal, the more lucrative it becomes to hackers. The healthcare industry has had less time than others to adapt to growing security vulnerabilities, and online criminals have been quick to take notice.

GDPR and the End of Safe Harbor

It’s not news that governments around the globe are doing their part to promote data privacy by legislating protection of personal data and reinforcing it with significant penalties for non-compliance. The recent agreement on the European General Data Protection Regulation (GDPR) is just the latest example.

What is changing, however, is the rapid growth in data integration across the open Internet between hospitals, service providers like payment processors, insurance companies, government agencies, cloud applications and health information exchanges.  The borderless enterprise is a fact of life.

Using Encryption to Meet Data Privacy Regulations

It’s well known that a security strategy focused solely on perimeter defense is not good enough. For one thing, healthcare data must move outside its trusted network. Encryption is the best means to limit access to protected data, since only those with the encryption key can read it. But there are other factors to look at when considering technology to protect data in motion, particularly when compliance with HIPAA or other governmental data privacy regulations is an issue.

Briefly, when evaluating ciphers for file encryption (AES, described in FIPS 197, is the standard choice), it’s important to consider key size, e.g. 128, 192 or 256 bits, which affects the strength of the encryption. It’s also worth considering products with FIPS 140-2 certified ciphers, accredited for use by the US government, as an added measure of confidence.
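To make the key-size point concrete, here is a hedged sketch using the third-party Python `cryptography` package. This is an illustration only, not a FIPS 140-2 validated module, and the sample plaintext is invented:

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# FIPS 197 defines AES with 128-, 192- or 256-bit keys; a larger key
# buys a bigger security margin for a small CPU cost.
key = AESGCM.generate_key(bit_length=256)  # 32 bytes -> AES-256
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message under the same key
plaintext = b"claims-batch contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only a holder of the key can recover the plaintext
recovered = aesgcm.decrypt(nonce, ciphertext, None)
```

An authenticated mode such as AES-GCM also detects tampering: if the ciphertext is modified in transit, decryption fails rather than silently returning garbage.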

Here are several other things to consider to protect data in motion and ensure compliance:

  • End-to-end encryption: Encrypting files both in transit and at rest protects data on trusted servers from malware or malicious agents that have gained access to the trusted network
  • Visibility for audit: Reports and dashboards that provide centralized access to all transfer activity across the organization can reduce audit time and improve compliance
  • Integration with organizational user directories: LDAP or SAML 2 integration to user directories or identity provider solutions not only improves access control and reduces administrative tasks, but can also provide single sign-on capability and multi-factor authentication
  • Integration with other IT controls: Since data integration extends beyond perimeter defense systems, consider integrating with data scanning systems. Antivirus protects your network from malware in incoming files, and Data Loss Prevention (DLP) stops protected data from leaving.
  • End-point access to data integration services: There are more constituents than ever participating in data exchange. Each has unique needs and likely requires one or more of the following services:
    • Secure file transfer from any device or platform
    • Access status of data movement to manage Service Level Agreements (SLAs)
    • Schedule or monitor pre-defined automated transfer activities
  • Access control: With the growing number of participants, including those outside the company, it’s more important than ever to carefully manage access with role-based security, ensuring each participant has appropriate access to the required data and services.
  • File transfer automation: Automation can eliminate misdirected transfers by employees and external access to the trusted network. Using a file transfer automation tool can also significantly reduce IT administration time and the backlog of business integration process enhancement requests.
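As a small illustration of the visibility point above, a transfer process can record a digest for each file so receivers and auditors can verify integrity later. This is a minimal standard-library sketch; the log schema and function names are invented for this example and are not any Ipswitch product API:

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_transfer(filename: str, payload: bytes, audit_log: list) -> dict:
    """Append one audit entry per transfer (hypothetical schema)."""
    entry = {
        "file": filename,
        "sha256": sha256_hex(payload),
        "bytes": len(payload),
        "utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    audit_log.append(entry)
    return entry

audit_log = []
payload = b"sample payload"
entry = record_transfer("claims-batch.edi", payload, audit_log)

# A receiver recomputes the digest to confirm nothing changed in transit
assert sha256_hex(payload) == entry["sha256"]
print(json.dumps(entry))  # one line per transfer for the audit trail
```

Centralizing these entries is what makes the audit-time reporting described above possible.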

Become Privacy Safe Starting with This Webinar

Protecting PHI doesn’t have to make it painful for hospital administrators or doctors to access the information they need, but it does mean having the right technology and good training in place. And in honor of Data Privacy Day, don’t you want to tell your customers that their data is safe? You’ll be one step closer by signing up for tomorrow’s live webinar.

Learn how you can implement health data privacy controls to secure your healthcare data >> Register Here

For more on this topic, register to hear David Lacey, a former CISO and security expert who drafted the original text behind ISO 27001, speak about implementing HIPAA and other healthcare security controls with a managed file transfer solution.


In my last post on the Ipswitch blog, I described how the Internet of Things (IoT) will change the nature of the IT team’s role and responsibilities. The primary purpose of initiating an IoT strategy is to capture data from a broader population of product endpoints. As a result, IoT deployments are also creating a new set of application performance management (APM) and infrastructure monitoring requirements.

New APM and Infrastructure Monitoring Requirements for IoT

Historically, traditional APM and infrastructure monitoring solutions were designed to track the behavior of a relatively static population of business applications and systems supporting universally recognized business processes.

Even this standard assortment of applications, servers and networks could be difficult to properly administer without the right kind of management tools. But over time, most IT organizations have gained a pretty good sense of how to handle these tasks and how to determine whether their applications and systems are behaving properly.

Now, the APM and infrastructure monitoring function is becoming more complicated in the rapidly expanding world of IoT.

In a typical IoT scenario, an IT organization could be asked to monitor the performance of the software that captures data from a variety of “wearables”. These software-enabled devices might be embedded in various fitness, fashion or health-related products, and each poses differing demands when it comes to ensuring reliable application performance.

In another situation, sensors might be deployed on a fleet of vehicles, and the data being retrieved could be used to alert the service desk when a truck is in distress, due for a tune-up, or simply needs to change its route to reach its destination more cost-effectively.

The Key to Successful IoT Deployments

Regardless of the specific use case, the key to making an IoT deployment successful is properly monitoring the performance of the software that captures the sensor data, not to mention the systems that interpret the meaning of that data and dictate the appropriate response via an application-initiated command.

Therefore, an IoT deployment typically entails monitoring a wide array of inter-related applications that could impact a series of business processes.

For example, an alert regarding a truck experiencing a problem could trigger a request for replacement parts from an inventory management system. This can lead to the dispatch of a service truck guided by a logistics software system. It could also be recorded in a CRM, ERP or other enterprise app to ensure sales, finance and other departments are aware of the customer status. Ultimately, the information could be used to redesign the product and services to make them more reliable, improve customer satisfaction and increase corporate efficiency.
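That fan-out from a single sensor alert to multiple business systems can be sketched as a simple publish/subscribe dispatcher. The names and message formats below are illustrative, not any real fleet-management or Ipswitch API:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SensorAlert:
    vehicle_id: str
    code: str       # e.g. "ENGINE_OVERHEAT"
    severity: int   # 1 (info) .. 5 (critical)

class AlertBus:
    """Fan a single IoT alert out to downstream business systems."""

    def __init__(self) -> None:
        self.handlers: List[Callable[[SensorAlert], str]] = []

    def subscribe(self, handler: Callable[[SensorAlert], str]) -> None:
        self.handlers.append(handler)

    def publish(self, alert: SensorAlert) -> List[str]:
        # Each subscriber reacts independently; a real deployment would
        # also monitor that every handler completed successfully.
        return [handler(alert) for handler in self.handlers]

bus = AlertBus()
bus.subscribe(lambda a: f"inventory: reserve parts for {a.code}")
bus.subscribe(lambda a: f"logistics: dispatch service truck to {a.vehicle_id}")
bus.subscribe(lambda a: f"crm: log incident for {a.vehicle_id}")

actions = bus.publish(SensorAlert("truck-42", "ENGINE_OVERHEAT", severity=4))
```

Each of the resulting actions maps to one of the systems named above (inventory management, logistics, CRM), which is exactly the inter-related application chain that APM tooling has to keep healthy.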

Monitoring these applications and the servers that support them to ensure they are operating at an optimal level across the IoT supply-chain is the new APM reality.

The IoT infrastructure is a lot more complicated than traditional application and server environments of the past. Given that, unified infrastructure monitoring solutions that provide end-to-end views of application delivery can provide significant management leverage.

Related article: The Internet of Things: A Real-World View

Click here for a free 30-day trial of WhatsUp Gold

Last week I got about halfway through writing my “deep dive” into what’s new in WhatsUp Gold version 16.4 and realized this was going to have to be a two-parter. So consider this post a “part 2 of 2” and enjoy the swim in the pool as you check out the new features and what they mean to you. Here’s a link to part 1 of this blog, in case you missed it.

SNMP Extended Monitor

SNMP (Simple Network Management Protocol) is a fundamental part of any network monitoring product, and as you’d expect, WhatsUp Gold speaks SNMP fluently. We have active SNMP monitors, performance SNMP monitors, and Alert Center Threshold SNMP monitors. But keeping all your SNMP monitors straight can be a challenge for a network administrator.

To help with this, in WhatsUp Gold 16.4 we have added the SNMP Extended Monitor. This is a new active monitor that allows you to consolidate many SNMP monitors into one. If, for example, you want to monitor 10 different SNMP OIDs (object identifiers) on a certain device, but don’t want to clutter the device with all these individual monitors, simply add a single SNMP Extended Monitor and consolidate your OIDs there. Within the single monitor, you get to set thresholds on each OID. Tripping any of the thresholds will trigger whatever alerts you have set up for the device. You can get the details of which OID triggered the alert via the State Change Log, or in an email alert.

Another great feature of the SNMP Extended Monitor is the ability to load and reuse the multi-OID configurations from a standard XML file. This allows you to re-use the OID definitions and their associated thresholds across many devices.
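To illustrate the idea of a reusable multi-OID definition, here is a rough sketch that parses a hypothetical XML file of OIDs and checks sampled values against each per-OID threshold. The schema and helper names are invented for this example and may differ from WhatsUp Gold's actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical multi-OID file; the real WhatsUp Gold schema may differ.
CONFIG = """
<snmp-extended-monitor>
  <oid id=".1.3.6.1.2.1.1.3.0" name="sysUpTime" threshold="100"/>
  <oid id=".1.3.6.1.4.1.2021.10.1.3.1" name="load1" threshold="4"/>
</snmp-extended-monitor>
"""

def load_oids(xml_text: str) -> list:
    """Parse OID definitions and their per-OID thresholds."""
    root = ET.fromstring(xml_text)
    return [
        {
            "oid": elem.get("id"),
            "name": elem.get("name"),
            "threshold": float(elem.get("threshold")),
        }
        for elem in root.findall("oid")
    ]

def breached(sampled: dict, oids: list) -> list:
    """Return the names of OIDs whose sampled value trips its threshold."""
    return [o["name"] for o in oids if sampled.get(o["oid"], 0) > o["threshold"]]

oids = load_oids(CONFIG)
alerts = breached({".1.3.6.1.4.1.2021.10.1.3.1": 5.0}, oids)
```

Keeping the definitions in a file is what makes them reusable: the same OID list and thresholds can be applied to every device of the same type.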

Application Performance Monitor

Application Performance Monitor is a powerful plugin for WhatsUp Gold. It allows you to systematically monitor servers on your network higher up the stack and look at critical statistics that relate directly to the performance of your running applications. And it comes with a bunch of pre-defined application profiles that let you get up and running quickly. With the release of WhatsUp Gold 16.4, we have added some new monitoring profiles as we continue to add value to this product. We’ve added profiles for Linux, Apache web servers, Windows DNS, SharePoint 2013, and Microsoft SQL named instances.

I’m particularly excited by the addition of Linux and Apache profiles. We already had a profile for MySQL, so now we’ve pretty much got the LAMP stack covered. As enterprises start to roll out Linux and other open source technologies, there’s no reason to change your monitoring environment. Keep it all in the single pane of glass with WhatsUp Gold.

JMX Monitoring

Related to my excitement about monitoring the LAMP stack in our Application Performance Monitor, I’m also thrilled about our new ability to monitor JMX, or Java Management Extensions. JMX is a technology used in Java application servers, many of which are open source, like Apache Tomcat or Apache ActiveMQ. JMX allows these application servers to export various measurements and statistics related to the Java application. Think of it like SNMP, but for Java apps.

In WhatsUp Gold 16.4, we’ve added the ability to create active JMX monitors, and performance JMX monitors, so you can get alerts when a monitor is out of threshold, as well as chart the performance over time. And, because navigating JMX can sometimes be difficult (just like SNMP), we’ve provided a JMX browser in the product, so that you can quickly figure out what measurements your app server is exporting (just like our SNMP browser).

These three new features, plus the ones I went over in last week’s post (aka part 1 of 2) make it plain that we continue to innovate and add customer value. Give 16.4 a try!

And for those of you who want a super deep dive, check out this video that provides an 11-minute technical overview of WhatsUp Gold 16.4.


IT teams work valiantly behind the scenes every day to make sure their digital businesses stay connected. With challenges like dealing with cyber threats and new technology, or even just the sheer volume of day-to-day work, it is getting harder and harder for IT teams to keep necessary innovation from going off the rails. These threats to innovation are most glaring in small to mid-sized IT departments where personnel and budget resources tend to be more limited, and team members need to be both generalists and specialists. These are the true front lines of IT – where decisions need to be made quickly and business operations depend on systems functioning properly.

A recent survey by Ipswitch polling 2,685 IT professionals around the world indicated that the top challenges holding IT teams back in 2016 fell into eight distinct categories, with network and application performance monitoring (19 per cent), new technology updates and deployments (14 per cent) and time, budget and resource constraints (10 per cent) among the top responses.

Improving network performance

Ensuring network performance is no easy feat. IT teams are tasked with keeping an organisation’s networks running efficiently and effectively around the clock and need to be concerned with all aspects of network infrastructure, including apps, servers and network connected devices.

Application performance is an important aspect because every company relies on applications running over the network, and an interruption in performance means a stop to business. Workforce fluidity further complicates network performance, as does the proliferation of devices logging on, whether the activity is sanctioned (work laptops, phones etc.) or surreptitious (many forms of wearable tech).

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes; all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the Internet of Things (IoT) age.

The good news is that businesses often already have the monitoring tools they need to spot tell-tale signs of the network beginning to falter; they just aren’t using them to their full potential. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate root causes of complex issues quickly.

Efficient use of network monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to remotely login at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.
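The baseline idea behind spotting those recurring spikes can be sketched in a few lines of Python. This is a toy anomaly check over synthetic data, not how WhatsUp Gold actually computes its thresholds:

```python
from statistics import mean, stdev

def flag_spikes(samples, window=24, z=2.0):
    """Flag indices whose value sits more than z standard deviations
    above the rolling baseline of the previous `window` samples."""
    spikes = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and samples[i] > mu + z * sigma:
            spikes.append(i)
    return spikes

# Hourly bandwidth utilisation (%) with a morning login surge at index 30
traffic = [50.0 + (i % 5) for i in range(40)]
traffic[30] = 95.0
spikes = flag_spikes(traffic)  # -> [30]
```

If the same index is flagged day after day, that is a trend rather than an anomaly, and the team can plan bandwidth for it rather than fire-fight it.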

Evolving performance monitoring 

Infrastructure monitoring systems have evolved greatly over time, offering more automation and more ways to alert network administrators and IT managers to problems with the network. IT environments have become much more complex, resulting in a growing demand for comprehensive network, infrastructure and application monitoring tools. IT is constantly changing and evolving with organisations embracing cost-effective and consolidated IT management tools.

With that in mind, Ipswitch unveiled WhatsUp Gold 16.4, the newest version of its industry-leading unified infrastructure and application monitoring software. The new capabilities within WhatsUp Gold 16.4 help IT teams find and fix problems before the end users are affected, and are a direct result of integrating user feedback in order to provide a greater user experience. Efficient and effective network monitoring delivers greater visibility into network and application performance, quickly identifying issues to reduce troubleshooting time.

One thing is certain when it comes to network monitoring: the cost of implementing such technology is far outweighed by the cost of not doing so, especially once you start to add up the cost of any downtime, troubleshooting, performance and availability issues.

Related articles:

8 Issues Derailing IT Team Innovation in 2016