Ipswitch Blog

Where IT Pros Go to Grow


In this blog, part of our series on IT best practices, I'll share how network mapping works and how it can give you a complete view of your entire network.

Modern networks are full of connected devices, interdependent systems, virtual assets and mobile components. Monitoring each of these systems calls for technology that can discover and map everything on your network. Understanding and enacting the best practices of network mapping lays the groundwork for successful network monitoring.

An Overview of Network Mapping

Most forms of network management software require what's known as a "seed scope," a range of addresses defining the network to be mapped. Network mapping begins by discovering devices, using a number of protocols such as SNMP, SSH, Ping, Telnet and ARP to determine everything connected to the network.

Adequately mapping a large network requires using both Layer 2 and Layer 3 protocols. Together they create a comprehensive view of your network.
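
To give a feel for the simplest of these discovery techniques, here is a minimal ping-sweep sketch in Python. It is illustrative only: the address prefix is an assumption, the ping flags are Linux-specific, and a real mapping tool layers SNMP, ARP-cache and LLDP queries on top of plain ping.

    # Minimal ping-sweep discovery sketch: find hosts answering in a /24 "seed scope".
    # Illustrative only; real network mapping adds SNMP, ARP and LLDP on top of this.
    import subprocess

    def ping_sweep(prefix: str = "192.168.1.") -> list:
        live = []
        for host in range(1, 255):
            addr = prefix + str(host)
            # One echo request, one-second timeout (Linux ping flags; adjust per OS)
            rc = subprocess.run(["ping", "-c", "1", "-W", "1", addr],
                                stdout=subprocess.DEVNULL).returncode
            if rc == 0:
                live.append(addr)
        return live

    print(ping_sweep())  # addresses that answered an ICMP echo request

Sweeping 254 addresses one at a time is slow; production discovery engines parallelize the probes and merge the results with protocol-specific queries.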

The Two Types of Network Maps

Network discovery protocols are broken up into two categories, or layers:

  1. Layer 2: Defined as the "data link layer," these protocols discover port-to-port connections and linking properties. Layer 2 protocols are largely proprietary, which is why the vendor-neutral Link Layer Discovery Protocol (LLDP) must be enabled on every network device.
  2. Layer 3: Defined as the "network layer," these protocols explore entire neighborhoods of devices by using SNMP-based technology to discover which devices interact with other devices.

Surprisingly, most IT infrastructure monitoring solutions rely solely on Layer 3 protocols. While this succeeds in creating a comprehensive overview of the network, successful network mapping calls for using Layer 2 protocols as well. Layer 2 protocols provide the port-to-port connectivity and connected-device details that allow for faster troubleshooting when problems arise.

Conveniently enough, Ipswitch WhatsUp Gold uses Layer 2 discovery with ARP cache and the Ping Sweep method, combined with Layer 3 SNMP-enabled discovery methods to provide all the information needed to quickly identify and address problems.

Creating Network Diagrams

Network diagrams make use of the data generated by Layer 2 and Layer 3 protocols, and are super helpful for visualizing the entire network. One important best practice for network mapping is using network diagrams to ensure that the existing networks and IT processes are fully documented – and updated when new processes are added.

Microsoft Visio is the leading network diagramming software on the market. When data is imported, Visio allows for the creation of robust, customizable diagrams that are easy to share between organizations. Yet network managers who rely on Visio quickly discover that the lack of an auto-discovery feature severely limits its use.

Ipswitch WhatsConnected was created to solve this problem by auto-generating topology diagrams, which can be useful on their own or exported to Visio, Excel and other formats with a single click. WhatsConnected makes use of Layer 2 and Layer 3 protocols to provide Visio with everything it needs to generate the powerful diagrams it's known for.

Instituting solutions that follow these suggestions should provide the foundation needed for real-time network monitoring. Coming up next in our best IT practices series, we’ll review network monitoring. Learning how to make the most of network discovery and network mapping will give your organization cutting-edge network monitoring capabilities.

Related articles:

Best Practices Series: Network Discovery

Best Practices Series: IT Asset Management


Things on the gridiron have changed. Once the province of paper-based play analysis, complicated hand signals and rules reliant on the eyes and ears of human refs, football is now awash in tech. Just take a look at the broken Surface tablets from last week’s AFC championship. With the Panthers and Broncos squaring up for Super Bowl 50 next week, here’s a look at the NFL technology (and IT teams behind it) that help elevate the sport while keeping its time-honored traditions intact.

It starts at Art McNally GameDay Central, located at NFL Headquarters in New York City. From here, Game Operations staff are tasked with prepping every communication and broadcast system before gametime while checking for radio frequency conflicts and handling failures prior to air. From a corporate standpoint, the GameDay crew is analogous to CIOs and their admin staff; they get the “big picture,” ensuring sysadmins on the ground have the information necessary to get their jobs done.

Clean Frequencies

Key to Game Ops is keeping radio frequencies clean. As the number of licensed frequencies approved by the Federal Communications Commission (FCC) continues to grow, fewer clear channels exist for team officials and their support staff to use. With this in mind, operations must make sure both teams, their coaches and all TV network crews use the right bandwidth spectrum for headsets, microphones and any Wi-Fi connections to prevent accidental "jamming," which often leads to signal loss at a critical moment.

Operations staff are also responsible for ferreting out any “not-so-accidental” frequency interruptions; the New England Patriots’ “Headsetgate” comes to mind, especially since the team regularly shows potential as a Super Bowl contender. Did they really tamper with headsets? Maybe, maybe not — there have been a number of accusations over the past few years — but what matters for Super Bowl 50 is that Game Ops staff are up to the challenge of tracking down any technical issues regardless of origin or intent.

‘Instant’ Replay

Game Ops staff are also responsible for overseeing the use of NFL Instant Replay technology, which got its start in 1986, was removed in 1992 and then reimplemented in 1999. GameDay teams use the league’s proprietary NFL Vision software to analyze replays and communicate with both the stadium’s replay official and the referee before he goes under the hood — both of which shorten the length of a replay review. Think of it like analytics; the NFL is investing in software that can capture relevant data, serve it up to experts and empower users in (or on) the field.

On the Ground

Crews in the stadium during Super Bowl 50 are responsible for managing a few new pieces of hardware, including the Microsoft Surface tablets used to analyze defensive and offensive formations. These tablets have no Internet access and their software cannot be altered, though the league is currently testing a "video review" feature that may be implemented in future seasons.

Not everything works perfectly, though. As noted by GeekWire, a problem during the December 8, 2015, matchup between Dallas and Washington forced these tablets out of service and left coaches with pen-and-paper diagrams. And on January 24, 2016, in the AFC Championship game, the Patriots suffered significant tablet malfunctions, causing more than a few frustrations on the sidelines, especially since the Denver Broncos weren't required to give up their still-working tablets under the NFL's "equity rule." February's onsite IT will need to monitor not only the performance of the Sideline Viewing System, but also its connection to each team's tablets. System monitoring comes to mind here: small precursor events in still-picture taking or tablet connections could act as warning signs for larger problems, if caught early enough.

Real-Time Stats

There’s also a need for data aggregation as the league moves toward full adoption of M2M systems like Zebra player tracking. Using RFID chips in each player’s shoulder pads, it is now possible to track their movements in-game in real time, then provide “next-generation stats” for fans. The larger value, however, comes in the form of actionable data obtained by recording and mining the information collected by these sensors. NFL technology professionals are tasked with not only ensuring this data stream is uninterrupted, but also making something useful out of the final product — a curated version of player data that trainers can use to improve Super Bowl performance.

Data Encryption

NFL teams need to transfer highly sensitive files containing details on trades, playbooks and player contracts. In the past, the Denver Broncos used thumb drives and CDs to physically pass around large data files, including high-res video and image files. It was a manual and unstructured process that proved to be a time waster, lacking even basic security controls. Email was not an option because most IT teams limit the size of email attachments.

In order to secure their data in motion and move it without hassle, regardless of the size, the Broncos picked Ipswitch WS_FTP software for secure data transfer internally between departments, and externally with partners.

A New Career?

Interested in working support for the NFL? It’s possible: While the Cleveland Browns are hiring an intern, the Washington Redskins need help at the helpdesk and the Seattle Seahawks are looking for a CRM data analyst. Interestingly, the job descriptions read like standard tech sector advertisements; NFL clubs have become enterprises in and of themselves, requiring multiple teams of backend IT personnel in addition to those on the ground during regular and postseason play.

Even the NFL is not all glitz and glory for IT. In fact, the league’s mandate is similar to most tech firms: Keep systems up and running while collecting and curating actionable data. Ultimately it’s a team effort — the work of many, not the power of one, moves the chains and snatches victory from the jaws of overtime.


When your network goes down or your computer isn't operating as it should, sometimes the best thing to do is reboot. It's often the first solution when troubleshooting problems. We took this notion and applied it to our Ipswitch Community. This month, we relaunched and combined our Ipswitch communities into one.

As IT pros know, an online community is a powerful tool, allowing folks to connect, learn and share thoughts, problems and ideas. With this in mind, we wanted to create a community where our customers and other IT pros can come together to give feedback about our products and services, ask questions, relate their own findings and build a network of other users.

Uniting Product Resources on the Ipswitch Community

The Ipswitch Community has different spaces for different products, such as WhatsUp Gold and File Transfer, but unites all these resources in one place. The Community also connects to the knowledge base for self-help and links to additional support resources. So no matter how a customer wants to solve an issue, the full arsenal of tools is available.

The new Community experience has been simplified so it's easier to use and faster to navigate, making it more inviting both for existing members and for new community users.

How to Get Involved and Join the Conversation


Come visit today and get involved. Community moderators have even provided tips on how to ask effective questions to get the most out of the community. My "Getting started with the community" post gives you useful links and tips: how to set up an account, update your profile and read the Community charter, plus how to write better questions and ideas, since detailed descriptions, brief language and images get you to your point quicker and attract more attention.

I think our Community charter sets a few reasonable guidelines. Requiring visitors to use real names and photos ensures they are interacting as people on the site. Constructive criticism is encouraged as it can establish a productive dialogue. And we do hope that all of our community members play nicely with others.

Beyond the basic facilities of forums, question asking and connection, active community members can get involved in feedback groups and beta testing, and talk with our product and UX teams. Community involvement is a great way for us to hear from our customers and others as we strive to create great products and services.

Our Community is here for folks to learn together and provide an outlet for questions, concerns and insight. Join today to find out how you can get closer to other users, my colleagues and our products.

 


This Thursday, January 28th, is Data Privacy Day (aka Data Protection Day in Europe). The purpose of Data Privacy Day is to raise awareness and promote privacy and data protection best practices. To honor Data Privacy Day, here are some ways you can protect personal healthcare information (PHI) in motion, an area of focus for healthcare IT teams handling PHI.

Personal Healthcare Info is a Hacker’s Dream

PHI is considered the most sought-after data by cyber criminals in 2016. Hackers are moving away from other forms of cyber crime, such as those targeting bank accounts. Instead, they are focusing more on PHI due to the amount of data contained within it. Valuable data within PHI includes social security numbers, insurance policy info, credit card info and more.

The lack of a consistent approach to data security throughout the healthcare industry also makes healthcare data easier to obtain. The easier it is to steal, the more lucrative the data becomes to hackers. The healthcare industry has had less time than others to adapt to growing security vulnerabilities, and online criminals have taken notice.

GDPR and the End of Safe Harbor

It's not news that governments around the globe are doing their part to promote data privacy. They are doing this by legislating the protection of personal data and reinforcing it with significant penalties for non-compliance. The recent agreement on the European General Data Protection Regulation (GDPR) is just the latest example.

What is changing, however, is the rapid growth in data integration across the open Internet between hospitals, service providers like payment processors, insurance companies, government agencies, cloud applications and health information exchanges.  The borderless enterprise is a fact of life.

Using Encryption to Meet Data Privacy Regulations

It's well known that a security strategy focused on perimeter defense is not good enough. For one, healthcare data must move outside its trusted network. Encryption is the best means to limit access to protected data, since only those with the encryption key can read it. But there are other factors to look at when considering technology to protect data in motion, particularly when compliance with HIPAA or other governmental data privacy regulations is an issue.

Briefly, when evaluating ciphers for file encryption (AES, described in FIPS 197, is the standard choice), it's important to consider key size, e.g., 128, 192 or 256 bits, which affects security. It's also worth considering products with FIPS 140-2 certified ciphers, accredited for use by the US government, as an added measure of confidence.
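
As a minimal illustration of AES-256 encryption at work, here is a sketch using the open-source Python cryptography package. The file name is hypothetical, this is not any particular vendor's implementation, and a real deployment would pair it with proper key management.

    # AES-256-GCM file encryption sketch using the "cryptography" package.
    # Illustrative only: real systems need key management, not an in-memory key.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_file(path: str, key: bytes) -> bytes:
        aesgcm = AESGCM(key)    # accepts a 16-, 24- or 32-byte (128/192/256-bit) key
        nonce = os.urandom(12)  # must be unique per message for a given key
        with open(path, "rb") as f:
            plaintext = f.read()
        # GCM provides confidentiality plus integrity (an auth tag is appended)
        return nonce + aesgcm.encrypt(nonce, plaintext, None)

    key = AESGCM.generate_key(bit_length=256)       # AES-256, per FIPS 197
    blob = encrypt_file("patient_record.txt", key)  # hypothetical PHI file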

Here are several other things to consider to protect data in motion and ensure compliance:

  • End-to-end encryption: Encrypting files in transit and at rest protects data from being read on trusted servers by malware or malicious agents that have gained access to the trusted network
  • Visibility for audit: Reports and dashboards to provide centralized access to all transfer activity across the organization can reduce audit time and improve compliance
  • Integration with organizational user directories: LDAP or SAML 2 integration to user directories or identity provider solutions not only improves access control and reduces administrative tasks, but can also provide single sign-on capability and multi-factor authentication
  • Integration with other IT controls: While data integration extends beyond perimeter defense systems, consider integrating with data scanning systems. Antivirus protects your network from malware in incoming files, and Data Loss Prevention (DLP) stops protected data from leaving.
  • End-point access to data integration services: There are more constituents than ever participating in data exchange. Each has unique needs and likely requires one or more of the following services:
    • Secure file transfer from any device or platform (a minimal sketch follows this list)
    • Access status of data movement to manage Service Level Agreements (SLAs)
    • Schedule or monitor pre-defined automated transfer activities
  • Access control: With the growing number of participants, including those outside the company, it's more important than ever to carefully manage access with role-based security, ensuring each participant has appropriate access to the required data and services.
  • File transfer automation: Automation can eliminate misdirected transfers by employees and external access to the trusted network. Using a file transfer automation tool can also significantly reduce IT administration time and the backlog of business integration process enhancement requests.
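
To make the secure file transfer service mentioned in the list concrete, here is a minimal SFTP upload sketch using the open-source paramiko SSH library. The host name, account and paths are invented placeholders, and a production transfer would use key-based authentication rather than a password.

    # Minimal SFTP upload sketch using paramiko; the hostname, credentials and
    # paths are hypothetical placeholders, not a real configuration.
    import paramiko

    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()  # verify the server's host key before connecting
    ssh.connect("transfer.example-hospital.org", username="billing",
                password="use-key-auth-in-production")
    sftp = ssh.open_sftp()
    sftp.put("claims_batch.csv.enc", "/inbound/claims_batch.csv.enc")
    sftp.close()
    ssh.close()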

Become Privacy Safe Starting with This Webinar

Protecting PHI doesn't have to make it painful for hospital administrators or doctors to access that information appropriately, but it does mean having the right technology and good training in place. And in honor of Data Privacy Day, don't you want to tell your customers that their data is safe? You will be one step closer by signing up for tomorrow's live webinar.

Learn how you can implement health data privacy controls to secure your healthcare data >> Register Here

For more on this topic, register to hear David Lacey, a former CISO and security expert who drafted the original text behind ISO 27001, speak about implementing HIPAA and other healthcare security controls with a managed file transfer solution.


In my last post on the Ipswitch blog, I described how the Internet of Things (IoT) will change the nature of the IT team’s role and responsibilities. The primary purpose of initiating an IoT strategy is to capture data from a broader population of product endpoints. As a result, IoT deployments are also creating a new set of application performance management (APM) and infrastructure monitoring requirements.

New APM and Infrastructure Monitoring Requirements for IoT

Historically, traditional APM and infrastructure monitoring solutions were designed to track the behavior of a relatively static population of business applications and systems supporting universally recognized business processes.

Even this standard assortment of applications, servers and networks could be difficult to properly administer without the right kind of management tools. But over time, most IT organizations have gained a pretty good sense of how to handle these tasks and determine whether their applications and systems are behaving properly.

Now, the APM and infrastructure monitoring function is becoming more complicated in the rapidly expanding world of IoT.

In a typical IoT scenario, an IT organization could be asked to monitor the performance of the software that captures data from a variety of "wearables." These software-enabled devices might be embedded in various fitness, fashion or health-related products, and each poses differing demands for ensuring reliable application performance.

In another situation, sensors might be deployed on a fleet of vehicles, and the data being retrieved could be used to alert the service desk when a truck is in distress, due for a tune-up or simply needs to change its route to reach its destination more cost-effectively.
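
As a toy illustration of the kind of rule such monitoring software might apply, here is a short sketch in Python; the reading fields and thresholds are invented for the example.

    # Toy fleet-telemetry triage: map one vehicle reading to a service-desk alert.
    # Field names and thresholds are hypothetical, for illustration only.
    from typing import Optional

    def triage(reading: dict) -> Optional[str]:
        if reading["engine_temp_c"] > 110:
            return "DISTRESS: engine overheating, dispatch a service truck"
        if reading["miles_since_service"] > 10_000:
            return "MAINTENANCE: tune-up due"
        if reading["eta_delay_min"] > 30:
            return "ROUTING: suggest a more cost-effective route"
        return None  # nothing to report

    print(triage({"engine_temp_c": 117, "miles_since_service": 4200,
                  "eta_delay_min": 5}))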

The Key to Successful IoT Deployments

Regardless of the specific use case, the key to making an IoT deployment successful is properly monitoring the performance of the software that captures the sensor data, not to mention the systems that interpret the meaning of that data and dictate the appropriate response via an application-initiated command.

Therefore, an IoT deployment typically entails monitoring a wide array of inter-related applications that could impact a series of business processes.

For example, an alert regarding a truck experiencing a problem could trigger a request for replacement parts from an inventory management system. This can lead to the dispatch of a service truck guided by a logistics software system. It could also be recorded in a CRM, ERP or other enterprise app to ensure sales, finance and other departments are aware of the customer status. Ultimately, the information could be used to redesign the product and services to make them more reliable, improve customer satisfaction and increase corporate efficiency.

Monitoring these applications and the servers that support them to ensure they are operating at an optimal level across the IoT supply-chain is the new APM reality.

The IoT infrastructure is a lot more complicated than traditional application and server environments of the past. Given that, unified infrastructure monitoring solutions that provide end-to-end views of application delivery can provide significant management leverage.

Related article: The Internet of Things: A Real-World View


Last week I got about halfway through writing my “deep dive” into what’s new in WhatsUp Gold version 16.4 and realized this was going to have to be a two-parter. So consider this post a “part 2 of 2” and enjoy the swim in the pool as you check out the new features and what they mean to you. Here’s a link to part 1 of this blog, in case you missed it.

SNMP Extended Monitor

SNMP (Simple Network Management Protocol) is a fundamental part of any network monitoring product, and as you'd expect, WhatsUp Gold speaks SNMP fluently. We have active SNMP monitors, performance SNMP monitors and Alert Center threshold SNMP monitors. But keeping all your SNMP monitors straight can be a challenge for a network administrator.

To help with this, in WhatsUp Gold 16.4 we have added the SNMP Extended Monitor. This is a new active monitor that allows you to consolidate many SNMP monitors into one. If, for example, you want to monitor 10 different SNMP OIDs (object identifiers) on a certain device, but don't want to clutter the device with all these individual monitors, then simply add a single SNMP Extended Monitor and consolidate your OIDs there. Within the single monitor, you get to set thresholds on each OID. Tripping any of the thresholds will trigger whatever alerts you have set up for the device. You can get the details of which OID triggered the alert via the State Change Log, or in an email alert.

Another great feature of the SNMP Extended Monitor is the ability to load and reuse the multi-OID configurations from a standard XML file. This allows you to re-use the OID definitions and their associated thresholds across many devices.
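
For a rough sense of what consolidated multi-OID polling with per-OID thresholds involves, here is a sketch using the open-source pysnmp library. The OIDs, community string, address and limits are examples only, and this is not how WhatsUp Gold itself is implemented.

    # Multi-OID SNMP poll with per-OID thresholds, sketched with pysnmp's hlapi.
    # Example OIDs and limits only; not WhatsUp Gold's actual implementation.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    THRESHOLDS = {                       # OID -> maximum allowed value
        "1.3.6.1.4.1.2021.11.9.0": 90,   # UCD-SNMP: percentage of CPU user time
        "1.3.6.1.4.1.2021.11.10.0": 50,  # UCD-SNMP: percentage of CPU system time
    }

    for oid, limit in THRESHOLDS.items():
        errInd, errStat, errIdx, varBinds = next(getCmd(
            SnmpEngine(), CommunityData("public"),
            UdpTransportTarget(("192.0.2.10", 161)), ContextData(),
            ObjectType(ObjectIdentity(oid))))
        if errInd or errStat:
            print(oid, "poll failed:", errInd or errStat)
        elif int(varBinds[0][1]) > limit:
            print(oid, "tripped threshold:", varBinds[0][1], ">", limit)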

Application Performance Monitor

Application Performance Monitor is a powerful plugin for WhatsUp Gold. It allows you to systematically monitor servers on your network higher up the stack, and look at critical statistics that relate directly to the performance of your running applications. And it comes with a bunch of pre-defined application profiles that let you get up and running quickly. With the release of WhatsUp Gold 16.4 we have added some new monitoring profiles as we continue to add value to this product. We've added profiles for Linux, Apache web servers, Windows DNS, SharePoint 2013 and Microsoft SQL named instances.

I'm particularly excited by the addition of the Linux and Apache profiles. We already had a profile for MySQL, so now we've pretty much got the LAMP stack covered. As enterprises start to roll out Linux and other open-source technologies, there's no reason to change your monitoring environment. Keep it all in the single pane of glass with WhatsUp Gold.

JMX Monitoring

Related to my excitement about monitoring the LAMP stack in our Application Performance Monitor, I'm also thrilled about our new ability to monitor JMX, or Java Management Extensions. JMX is a technology used in Java application servers, many of which are open source, like Apache Tomcat or Apache ActiveMQ. JMX allows these application servers to export various measurements and statistics related to the Java application. Think of it like SNMP, but for Java apps.

In WhatsUp Gold 16.4, we’ve added the ability to create active JMX monitors, and performance JMX monitors, so you can get alerts when a monitor is out of threshold, as well as chart the performance over time. And, because navigating JMX can sometimes be difficult (just like SNMP), we’ve provided a JMX browser in the product, so that you can quickly figure out what measurements your app server is exporting (just like our SNMP browser).

These three new features, plus the ones I went over in last week’s post (aka part 1 of 2) make it plain that we continue to innovate and add customer value. Give 16.4 a try!

And for those of you who want a super deep dive, check out this video that provides an 11 minute technical overview of WhatsUp Gold 16.4.


IT teams work valiantly behind the scenes every day to make sure their digital businesses stay connected. With challenges like dealing with cyber threats and new technology, or even just the sheer volume of day-to-day work, it is getting harder and harder for IT teams to keep necessary innovation from going off the rails. These threats to innovation are most glaring in small to mid-sized IT departments where personnel and budget resources tend to be more limited, and team members need to be both generalists and specialists. These are the true front lines of IT – where decisions need to be made quickly and business operations depend on systems functioning properly.

A recent survey by Ipswitch polling 2,685 IT professionals around the world indicated that the top challenges holding IT teams back in 2016 fell into eight distinct categories, with network and application performance monitoring (19 per cent), new technology updates and deployments (14 per cent) and time, budget and resource constraints (10 per cent) among the top responses.

Improving network performance

Ensuring network performance is no easy feat. IT teams are tasked with keeping an organisation’s networks running efficiently and effectively around the clock and need to be concerned with all aspects of network infrastructure, including apps, servers and network connected devices.

Application performance is an important aspect because every company relies on applications running over the network, and an interruption in performance means a stop to business. Workforce fluidity further complicates network performance, as does the proliferation of devices logging on, whether the activity is sanctioned (work laptops, phones etc.) or surreptitious (many forms of wearable tech).

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don't care what's causing the problems behind the scenes; all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the IoT (Internet of Things) age.

The good news is that businesses often already have the monitoring tools they need to spot the tell-tale signs of a network beginning to falter; they just aren't using them to their full potential. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate the root causes of complex issues quickly.

Efficient use of network monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to remotely login at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.
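
A hedged sketch of that idea in Python: aggregate bandwidth samples by hour of day so recurring spikes, like a morning remote-login rush, stand out. The (timestamp, Mbps) sample format is invented for the example.

    # Toy trend analysis: average bandwidth by hour of day to expose daily spikes.
    from collections import defaultdict
    from datetime import datetime

    samples = [("2016-02-01 09:05", 840.0), ("2016-02-01 13:10", 310.5),
               ("2016-02-02 09:02", 875.2), ("2016-02-02 13:15", 298.0)]

    by_hour = defaultdict(list)
    for ts, mbps in samples:
        by_hour[datetime.strptime(ts, "%Y-%m-%d %H:%M").hour].append(mbps)

    for hour in sorted(by_hour):
        avg = sum(by_hour[hour]) / len(by_hour[hour])
        print("%02d:00  avg %7.1f Mbps" % (hour, avg))  # the 09:00 spike stands out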

Evolving performance monitoring 

Infrastructure monitoring systems have evolved greatly over time, offering more automation and more ways to alert network administrators and IT managers to problems with the network. IT environments have become much more complex, resulting in a growing demand for comprehensive network, infrastructure and application monitoring tools. IT is constantly changing and evolving with organisations embracing cost-effective and consolidated IT management tools.

With that in mind, Ipswitch unveiled WhatsUp Gold 16.4, the newest version of its industry-leading unified infrastructure and application monitoring software. The new capabilities within WhatsUp Gold 16.4 help IT teams find and fix problems before end users are affected, and are a direct result of integrating user feedback to provide a better user experience. Efficient and effective network monitoring delivers greater visibility into network and application performance, quickly identifying issues to reduce troubleshooting time.

One thing is certain when it comes to network monitoring: the cost of not implementing such a technology far outweighs the cost of doing so, especially once you start to add up the cost of any downtime, troubleshooting, performance and availability issues.

Related articles:

8 Issues Derailing IT Team Innovation in 2016

As confirmed by PricewaterhouseCoopers, attacks against small and midsized businesses (SMBs) between 2013 and 2014 increased by 64 percent. Why? Low price, high reward.

Attackers can break through millions of poorly defended SMBs through automation, gaining access to a treasure trove of data. Small-business vulnerability assessments can identify your weaknesses, but they take time away from daily operations. Is a security vulnerability assessment really worth the resources? These five questions will help you decide.

What Does It Entail?

A vulnerability assessment identifies precious assets as well as how attackers could steal them from you. Not surprisingly, 2014’s most common attack vectors were:

  • Software exploit (53 percent).
  • User interaction, such as opening a malicious email attachment or clicking through an unsafe URL (44 percent).
  • Web application vulnerability, like SQL injection, XSS or remote file inclusion (33 percent).
  • Use of stolen credentials (33 percent).
  • DDoS (10 percent).

It’s impossible to patch every vulnerability. “You can scan and patch 24/7, 365 days a year,” says Forrester security researcher Kelley Mak, “and still not take out a significant chunk.” The key is to identify vulnerabilities that will result in the most damage to your bottom line.

How Frequently Should We Assess?

Frequency depends on what kind of data you store and what kind of business you operate. If you can say yes to the following, you should assess more often:

  • You’ve never assessed security vulnerability before, or it’s been a while. In either case, establish a baseline with frequent assessments for a year or so. Then dial back the frequency.
  • You’re subject to regulatory compliance. If you’re just checking boxes, you’re only getting a limited security picture. Compliance is a baseline, not an effective defensive posture.
  • You’re a contractor for a government agency or valuable enterprise target. Cybercriminals love to use SMB vendors to break into higher-value targets. If one of your employees’ stolen authentication creds cost an enterprise millions of dollars, you’d kiss your contract goodbye.

Can Ops Do It?

Give another sysadmin the SANS 20 recommended list of security controls. If they can understand the controls, evaluate the business against them and remediate all associated issues, let them handle it.

Already too busy to take on the project? Bring in a specialist. Keep expenses down by getting an initial third-party assessment, drafting an action plan and joining the entire ops team in implementing it.

What Does a Top-Notch Third-Party Assessment Look Like?

Before you hire someone, ask them to explain how they conduct a security vulnerability assessment. According to Robbie Higgins, CISO of AbbVie and author for SearchMidmarketSecurity, their services should include:

  • Information and infrastructure evaluation. The consultant should look at your information systems, stored data, hardware and software. Critical systems like billing, HR, CRM, legal and IP repositories are vital, but you should also focus on minor systems accessible by your own vendors.
  • Current threat landscape. In addition to knowing today’s common exploits and malware trends, your consultant should tell you what types of data attackers are after as of late and what kinds of organizations they’re currently targeting.
  • Awareness of internal soft spots. Attacks don’t always happen because employees are disgruntled. Simple incorrect data entry can expose you to an SQL injection.
  • Estimated impact. Your vendor should explain the degree to which each security vulnerability would affect data integrity, confidentiality and availability of your network resources.
  • Risk assessment. A good vendor combines weaknesses, threat landscape and potential impact to extrapolate your risks in priority order.
  • An action plan. Again, save on security consultation by letting your team execute this roadmap.

Is It Worth It?

Assessments and remediation could cost you in short-term payroll or a third-party consultant’s fee. But if they prevent a data breach that could shut down your business, almost any price is worthwhile.

It's a fact of IT life that technology has a finite lifespan, and managing technological change is tough. Procuring new software and hardware is only half the battle. The other half falls under what happens next and runs the gamut from integration to accessibility to security. This part gets tricky.

Need help? Here are 7 of the most common challenges you’ll face when you manage change during a technology transition, and how to deal with them.

1) Cultural Pushback

IT pros think about the nuts and bolts of new technology implementation from beginning to end, including how to manage the changeover. Front-line workers care how a new CRM or analytics tool is going to affect their daily job. IT teams need to communicate why a switchover is happening, the business benefits behind it, and what great things it means for the user. Your best bet is to get them prepared, over-communicate and stay on schedule. Make sure employees and executives alike have had every opportunity to learn what to expect when the transition goes live.

2) Handling Hype

When you manage change in technology you need to manage any hype attached to it. Look at artificial intelligence (AI) solutions. Given their cultural appeal, many users have extremely high expectations and are often disappointed by the end results. And with respect to the current direction of AI development, according to Hackaday, it's unlikely that devices will ever live up to those expectations; instead, a "new definition of intelligence" may be required.

In another example, consider the benefits and drawbacks of implementing a new OS such as Windows 10. Some users may want to upgrade to a new OS right away, but we know that an OS switch requires a plethora of testing, such as checking application compatibility, and that some of the most important updates for a new OS take at least a few months to arrive.

So what does this mean for IT pros during a tech transition? It means being clear about exactly what new tech will (and won’t) deliver, and communicating this to everyone.

3) Failure Can Happen

Things don’t always go as planned. In some cases new technology can actually make things worse. A recent article from The Independent notes that particulate filters introduced to curb NO2 emissions from vehicles actually had the opposite effect. The same goes for IT. If you are working on a new implementation that is unproven or risky, start small and consider it an A/B test outside the DMZ instead of a big bomb you have to somehow justify blowing up.

4) Risky ROI

While companies love to talk about ROI and technology going hand-in-hand, software-driven revenue is “mostly fiction,” according to Information Week. Bottom line? The more a solution costs to build or buy, the more you’ll need to invest in organizational redesign and retraining. In other words, technology does not operate in a vacuum.

5) Prepare for People

What happens when technology doesn’t work as intended? Employees and executives will come looking for answers. The fastest way to lose their confidence is by clamming up and refusing to talk about what happened or what’s coming next. It may not be worth breaking down the granular backend for them. Being prepared with a high-level explanation and potential timeline for restoration goes a long way toward instilling patience.

6) Lost in Translation

It’s easy for even simple messages to get garbled on their way up the management chain. Before, during and after the implementation of new technology, clarity is your watchword. Short, basic responses in everyday language to tech-oriented questions have the lowest chance of changing form from one ear to the next. You also don’t need to tell all the details. Just tell your users what they need to know. Providing too much information can be harmful and lead to confusion even if they think they understand.

7) It’s Not Fair

Guess what? Even when things are beyond your control, you’re still shouldering the blame. And because new technology implementation never goes exactly as planned, it’s good to have a backup plan. Say you’re rolling out IPv6 support for your website but things aren’t going well; you need an IPv4 reserve in your back pocket to ensure file transfers and page-load times don’t increase your bounce rate or tick off internal staff.

Unfortunately, "it's not my fault" doesn't apply in IT, however often you feel you could say so. On the hook for managing change in technology? Chances are you'll face at least one of these seven challenges on the road to effective implementation.


For most companies, a new year means a clean slate to renew goals and focus on success. Here at Ipswitch, we started the year releasing major improvements to WhatsUp Gold with version 16.4.

Diving Deep into WhatsUp Gold 16.4

In a previous blog post, my colleague Kevin Conklin outlined the general highlights of these updates. In this post, I will take a deeper dive into each improvement and its implications for monitoring your networks.

For those of you who want a super deep dive, check out this video that provides an 11 minute technical overview of WhatsUp Gold 16.4.

SSL Certificate Monitor

If you are responsible for web servers that use HTTPS, this monitor can save you serious embarrassment and potential loss of customers and revenue. If a certificate expires on your web server, your customers will be shown a scary expiration message instead of your web page.  While the message does allow your customers to get to your website via a special link, many customers will lose trust in your website and will simply abandon the page, and might never return.

To solve this nasty problem, the SSL Certificate Monitor will alert you a number of days before a certificate expires, based on a warning time frame that you select.   A common setting for this monitor is 30 or 60 days.  Thus, if an alert is triggered, you will have plenty of time to get a new certificate and load it before the current certificate expires.  Your customers will never know.

In addition to checking for certificate expiration, the monitor also tests whether the DNS name of the web server matches the canonical name in the certificate. This is another frequently encountered configuration error that can cause angst for your customers. With this monitor, you can ensure that this configuration error never goes unnoticed.
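
For the curious, here is roughly what such a check does behind the scenes: a minimal sketch using Python's standard ssl module, not WhatsUp Gold's actual implementation. The default context also verifies that the server's DNS name matches the certificate, raising an error on a mismatch.

    # Minimal TLS certificate expiry check using Python's standard library.
    # Illustrative sketch only; not WhatsUp Gold's implementation.
    import socket
    import ssl
    from datetime import datetime

    def days_until_expiry(host: str, port: int = 443) -> int:
        ctx = ssl.create_default_context()  # also checks hostname vs. cert name
        with ctx.wrap_socket(socket.create_connection((host, port)),
                             server_hostname=host) as s:
            cert = s.getpeercert()
        expires = datetime.utcfromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]))
        return (expires - datetime.utcnow()).days

    if days_until_expiry("www.example.com") < 30:  # 30-day warning window
        print("Renew the certificate now!")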

File Content Monitor

In WhatsUp Gold 16.4 we added a great new tool for the IT admin’s toolbox: the file content monitor. The monitor is deceptively simple.  It scans a text file or files of your choosing looking for a string, and then alerts if it finds the string.  This opens up WhatsUp Gold to monitoring lots of things that it couldn’t before, except through custom scripting.

A common use case is to monitor the logs of custom applications. Let’s say that a custom application puts the word ‘error’ into a log text file when some problem occurs.  Using this monitor, you can be alerted when this happens.  We’ve made sure that the monitor remembers where it was in the log file between polls, so it won’t alert again on the same error.  Or, you can have the monitor read the log file from the start on each poll, which handles other logging use cases, such as when a log file is re-written on a regular interval.  This is one of those monitors that can be used in all sorts of creative ways in diverse networks.
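
Here is a minimal sketch of that offset-remembering behavior in Python. It illustrates the idea only; it is not WhatsUp Gold's code, and the log file name and search string are examples.

    # Poll-based log scan that remembers its offset between polls, so the
    # same error line never alerts twice. An illustration of the idea only.
    import os

    def poll_log(path: str, needle: str, state: dict) -> list:
        """Return new matching lines since the last poll; state keeps the offset."""
        alerts = []
        if os.path.getsize(path) < state.get("offset", 0):
            state["offset"] = 0  # the file shrank, so it was rotated or rewritten
        with open(path, "r") as f:
            f.seek(state.get("offset", 0))
            for line in f:
                if needle in line:
                    alerts.append(line.rstrip())
            state["offset"] = f.tell()
        return alerts

    state = {}
    print(poll_log("app.log", "error", state))  # first poll scans the whole file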

Flow Monitor

Flow Monitor is a great plugin for WhatsUp Gold. It gives IT admins a detailed view of their network like no other part of the WhatsUp Gold platform.  We’ve made a couple of key changes to Flow Monitor in this release.

First, on the Flow Sources page, we have added better sorting and filtering. You are now able to filter sources based on DNS names and IP addresses, or any part thereof. If you want to see the sources that have interfaces in the 192.168.1.x network, no problem: just type '192.168.1'. In addition, we've added better sorting on the sources page. Both of these improvements were requested by our customers, especially those dealing with a large number of flow sources.

In addition to these interface improvements, we've also added two new reports: Top Endpoints and Top Endpoint Groups. A common use for Flow Monitor is to show which devices on your network are sending or receiving the most data. We have reports like Top Senders and Top Receivers for this. But we've never had a report that showed the devices on your network based on total traffic, both sending and receiving. That's what the Top Endpoints report does. In addition, like many of our other reports, we have a version of it meant for groups of IP addresses that you define, giving you a way to make your environment more understandable. With these two reports, you can really get at your bandwidth hogs like never before.

What’s Next

These improvements will make using monitors in WhatsUp Gold easier and more user friendly. In 2015, we identified the places WhatsUp Gold could be stronger and more useful on a day-to-day basis. This work prepared us to launch these exciting upgrades in 2016 and start the year off right.  Look for my next blog post for another deep dive into more new features.

Web security consists of multiple moving parts that can move in opposite directions. As a result, actions or technologies that improve one aspect of security may weaken another. Some enhancements might end up compromising your overall Web security.

An entanglement of just this sort builds even more complexity around the issue of government monitoring. Should there be limits on how much Web traffic merits encryption? Should law enforcement have "back door" access to encrypted activity? More to the point, what are the security implications of these policies or standards with respect to your department?

This concern isn’t about government traffic monitoring in general, however strong (and mixed) many people’s feelings may be about the government monitoring personal content. Your questions relating to encryption are narrower and less ideological, in a sense, because they carry profound implications for your company’s Web security.

A Double-Edged Sword

Online encryption wars are not new; as Cat Zakrzewski reports at TechCrunch, the debate goes back two decades. With so many growing more concerned about Web security, though, the issue has new urgency. In a nutshell: It is widely agreed in cybersecurity that encryption — particularly end-to-end encryption — is one of the most powerful tools in your infosec toolbox. For thieves, stolen data is a worthless jumble if they can’t read it. That’s the point of encryption.

End-to-end encryption provides a layer of protection to data over its full journey, from sender to recipient. Wherever thieves may intercept it along the way, all they can steal is gibberish. Law enforcement’s concern about this depth of encryption, however, is that anyone can use it — from terrorists to common criminals, both of whom have particularly strong reason to avoid being overheard. Moreover, new categories of malware, such as ransomware, work by encrypting the victim’s data so the blackmailer can then demand assets before decrypting it to make it usable again.

For Whom the Key Works

This problem is difficult, but not unusual: If lockboxes are available, cybercriminals can use them to protect their own nefarious secrets. The effective legal response is to then require that all lawfully sold lockboxes come with a universal passkey available to the police, who can then open them. There’s your back-door access.

But that’s where things get complicated. If a universal passkey for back-door access exists, it could potentially fall into the hands of unauthorized users — who can use it to read any encrypted message they intercept. Your personal mail, your bank’s account records, whatever they get access to.

(The NSA and its affiliates abroad can build their own encryption engines without this vulnerability, but such high-powered technology isn’t cheap — beyond the means of most criminals, terrorists and the like, of course.)

More Keys, More Endpoints

A special passkey available to law enforcement would presumably be very closely held, and not the sort of thing bad actors are likely to get their hands on by compromising an FBI clerk’s computer. But the primary concern in cybersecurity is that the software mods needed to provide a back door would make encryption less robust. This means encryption will be less effective for all uses, even the most legitimate ones.

In essence, a lock that two different keys can open is inherently easier for a burglar to pick. According to Reuters, White House cybersecurity coordinator Michael Daniel acknowledged he knew no one in the security community who agreed with him that a back door wouldn’t compromise encryption.

Crucially, this problem is independent of any concern about the governmental misuse of back-door decryption technology. Even if no government agency ever used the back door to decrypt a message, its existence makes it possible for a third party to reverse-engineer the key, or exploit a subtle bug in the backdoor functionality — thus enabling them to read the once-encrypted messages.

Encryption isn’t an absolute security protection; nothing is. But it is one of the most powerful security tools available, and your team is rightfully concerned about the risks of compromising it.

The hotshot developer your company just lost to a competitor could also be your biggest security risk from employee data theft. You shouldn't wait until he's left carrying a 1TB flash drive full of trade secrets to worry about what else may have just walked out the door.

But suppose you need to clean up a mess, or prevent one from occurring after somebody moves on. What steps can you take?

From Irate to Exfiltrate

First, understand what you’re stepping into. Employee exfiltration is an underreported problem in network defense. Whether because a former staffer has become disaffected, angry or simply accepting of a better offer elsewhere, there are many ways for a motivated knowledge worker to remove important data. And an IT pro is a special category of knowledge worker for whom data exfiltration is the greatest risk.

Back in 2010, as reported by Network World, DARPA asked researchers to study the ways they could improve detection and defense against network insiders. That program, Cyber Insider Threat (CINDER), attempted to address employee data theft — within military or government facilities. Those DARPA contracts were awarded because insider threats were generally neglected, due in part to a dominant perimeter threat mentality.

Research was well underway when in 2013 Edward Snowden demonstrated the full potential for data exfiltration to any remaining disbelievers.

The takeaway for every system administrator and CSO: If you're only focused on tweaking firewall settings, you may be at risk. Your company's lost data probably won't be published in The Guardian or The New York Times, and you won't be grilled on "60 Minutes." But you'd be right to sweat it.

Post-Termination Steps

After a termination, there are many steps you could take. The proper course of action will depend upon the employee’s access to data, organizational role and, generally, a mature risk assessment framework. Here are a few to point you in the right direction:

  1. Today, many employees have company data on their mobile devices. Company-owned or company-managed phones may have remote wipe features, such as through Google Apps. Use these to purge sensitive data.
  2. Certificate revocation: According to TechTarget, revoking the ex-employee's certificates is one approach to cutting off access to encrypted datasets.
  3. Studying logs with tools such as the Ipswitch Log Management Suite enables you to identify potentially anomalous activity over an extended period of time. The theft may not be recent.
  4. Examination of Windows event logs can help identify whether the ex-employee attached USB devices to a company workstation (a minimal sketch follows this list).
  5. Catalog all applications accessed by the employee, both on-premises and cloud applications.
  6. Working with affected line-of-business managers, identify any sensitive datasets.
  7. If the ex-employee had root or sysadmin privileges, wholesale permission schemes and passwords may need to be updated, especially for off-premises resources.
  8. Ex-employee-managed workstations (and possibly server instances) should be quarantined for a period of time before returning them to the asset pool.
  9. For especially sensitive settings, heightened audit and log monitoring of coworkers for a limited period of time may be called for.
  10. For ex-employees who enjoyed privileged access to IT resources, tools such as Ipswitch WhatsConfigured can identify attempts to relay data to offsite servers or sabotage applications.
  11. Know your application risks. Web conferencing tools like WebEx and GoToMeeting, for example, provide the means to share data outside the corporate sandbox.
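
To make step 4 concrete, here is one hedged way to pull recent USB device events from a Windows machine using the built-in wevtutil tool. The log channel named below is where Windows typically records USB attach and detach activity, but verify channel availability and the relevant event IDs for your Windows version.

    # Sketch: list recent USB device events via Windows' built-in wevtutil tool.
    # Verify the channel and relevant event IDs for your Windows version.
    import subprocess

    CHANNEL = "Microsoft-Windows-DriverFrameworks-UserMode/Operational"
    result = subprocess.run(
        ["wevtutil", "qe", CHANNEL, "/c:50", "/rd:true", "/f:text"],
        capture_output=True, text=True)
    print(result.stdout or result.stderr)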

Match Points

As with other sysadmin duties, you'll have to decide how much effort to put into mitigating a potential data loss. Knowing which data has been lost and the potential business impact may be just as important as knowing which logs to examine. In the meantime, don't overwhelm yourself with false alarms, and don't underestimate your opponent. These steps can help you even after the employee has left, but best practice says you'll have done much more before the termination event.

You’ve probably ceded the first few moves to your opponent. A determined adversary’s next moves might well include tripwires, sniffers and other mischief — at which point you’re going to need even more tools to get things back to normal.