Ipswitch Blog

Where IT Pros Go to Grow

Our Latest Posts


Do you get bogged down trying to maintain sufficient performance across your Microsoft applications while troubleshooting related problems as they happen? If so, here are seven tips that will help you manage your software from Redmond:

1: Don’t Try to Manage the Unknown

Ensuring optimal Microsoft application performance starts with automatically maintaining an up-to-date network and server inventory of hardware and software assets, physical connectivity, and configuration. This helps you truly understand what you are supporting in your environment. It also saves time spent identifying relationships between devices and applications and piecing them together to see the big picture. You may even find discrepancies in application versions or patch levels within Exchange or IIS server farms, which you can correct by discovering, mapping and documenting your assets.
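As a toy illustration of this tip, here is a minimal Python sketch (using a hypothetical inventory dictionary, not a real discovery API) that flags server roles running mismatched versions:

```python
from collections import defaultdict

def find_version_discrepancies(inventory):
    """Group servers by role and flag any role running more than one version."""
    versions_by_role = defaultdict(set)
    for server, (role, version) in inventory.items():
        versions_by_role[role].add(version)
    return {role: sorted(versions)
            for role, versions in versions_by_role.items()
            if len(versions) > 1}

# Hypothetical inventory: server -> (role, installed version)
inventory = {
    "exch01": ("Exchange", "15.1.2308"),
    "exch02": ("Exchange", "15.1.2375"),   # behind on patches
    "iis01":  ("IIS", "10.0"),
    "iis02":  ("IIS", "10.0"),
}
print(find_version_discrepancies(inventory))  # {'Exchange': ['15.1.2308', '15.1.2375']}
```

A real monitoring tool populates the inventory automatically; the payoff is the same comparison across the farm.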

2: Monitor the Whole Delivery Chain

There are multiple elements responsible for providing Microsoft services and application content to end-users. Take monitoring Lync, for example. Lync alone has:

  • A multi-tier architecture consisting of a Front-End Server at the core
  • SQL Database servers on the back-end
  • Edge Server to enable outside the firewall access
  • Mediation Server for VoIP
  • And more

You get the idea. The same applies to any Web-based application: SharePoint on the front end, middleware systems and back-end SQL databases, not to mention the underlying network. Don't take any shortcuts; monitor it all.

If any of these components in the application delivery chain underperforms, your Microsoft applications will inevitably slow down and bring employee communications, productivity and business operations down with them.

3: Understand Dependencies within Applications

There's nothing worse than receiving an alert storm when a problem is detected. It can take hours to sort out what has a red status, why it has that status, and whether it was a real problem or a false positive. That wastes time and delays root-cause identification and resolution.

A far better solution is to monitor the entire application service as a whole. This includes IIS servers, SQL servers, physical and virtual servers and the underlying network. Identify monitoring capabilities that will discover and track end-to-end dependencies and suppress alerts (if a database is “down,” all related apps will also be “down”). This is also the foundation to build SLA monitoring strategies aligned with business goals. Read on to find out more.
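The suppression idea above can be sketched in a few lines of Python; the dependency map and component names here are hypothetical stand-ins for what a discovery tool would build automatically:

```python
def suppress_dependent_alerts(down, depends_on):
    """Return only root-cause alerts: drop any down component whose own
    dependency is also down (a database outage explains the app outage)."""
    down = set(down)
    return sorted(c for c in down
                  if not any(dep in down for dep in depends_on.get(c, [])))

# Hypothetical dependency map: component -> what it relies on
depends_on = {
    "sharepoint": ["sql01", "iis01"],
    "crm-app":    ["sql01"],
}
alerts = ["sharepoint", "crm-app", "sql01"]
print(suppress_dependent_alerts(alerts, depends_on))  # ['sql01']
```

Instead of three pages, the on-call engineer gets one alert pointing at the actual failed database.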

4: Look for Tools That Can Go Deep

Application performance monitoring tools let you drill down from one unified view into the offending component to reduce triage and troubleshooting to just minutes. Even if you are not a DBA, you should be able to quickly identify that SQL is the culprit. Also consider automatic corrective actions as part of your monitoring strategy to restore service levels faster. These include writing to the event log, running Active Script or PowerShell scripts, and rebooting. For example, Exchange and SQL are well known for their high memory consumption and heavy I/O, so you may want to restart them automatically when memory usage reaches a problematic level, avoiding service disruptions for your users.
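A hedged sketch of that kind of corrective action, using a stand-in restart callback rather than a real reboot command, might look like this:

```python
def check_and_act(samples, threshold, restart_action):
    """Fire the corrective action only after three sustained breaches,
    so a single transient spike doesn't trigger a restart."""
    if len(samples) >= 3 and all(s > threshold for s in samples[-3:]):
        restart_action()
        return True
    return False

restarted = []
# Hypothetical memory-usage samples (percent) for an Exchange server
check_and_act([70, 92, 95, 97], threshold=90,
              restart_action=lambda: restarted.append("exchange"))
print(restarted)  # ['exchange']
```

The sustained-breach check is the important part: rebooting on every spike would cause more disruption than it prevents.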

5: Utilize Microsoft Application Monitoring Features

Use the built-in application monitoring features that come with your Microsoft applications like Exchange, SharePoint, Lync, IIS, Dynamics, SQL and Windows, or even some free tools. Every organization is different, so there really is no one-size-fits-all approach. Look for pre-packaged monitoring with settings you can easily tweak, so you can also monitor custom or more feature-rich applications.

6: Don’t Forget Wireless Bandwidth Monitoring

It is a wireless world out there, and BYOD continues to grow. Mobility has transformed wireless networks into business-critical assets that support employee connectivity, productivity and business operations. For example, Microsoft corporate headquarters runs Lync over Aruba Wi-Fi. Just as you want a map of your wired assets, look for capabilities to automatically generate dynamic wireless maps, covering WLCs, APs and clients, from the same single point of control.

7: Keep Stakeholders and Teams Regularly Updated

Your Microsoft applications may be the backbone of your business. Slowdowns, intermittent performance problems or failures will drive escalations through the roof, not to mention bringing productivity, operations and even revenue to a halt. Customizable reporting (by application, server, location, etc.) and automatic email distribution (daily, weekly, monthly, etc.) will help keep cross-functional team members and stakeholders in the know. Get in the habit of periodically analyzing all performance data to identify problematic trends early, plan capacity properly, and justify investment in additional resources.

Maintaining network performance can sometimes feel like a gargantuan task, with issues seemingly coming out of nowhere. However, many of these unforeseen problems can actually be anticipated and avoided with the correct monitoring solutions in place.

When transitioning to a new solution, do IT vendors elicit a mix of anticipation and fear? That makes sense. You're eager to see the new service hard at work, but simultaneously concerned it won't live up to the hype or deliver on the supplier's promises.

These transitions also tend to cost a lot of money and resources, so a failed transition usually doesn't bode well for the decision-maker. Although no transition is foolproof, it's worth running down the following support checklist. Have you covered all your bases, or is there more work to do before you take the plunge?

Have you researched other vendors?

The act of bringing in a new tech vendor is a lot like hiring a new employee. If you haven’t spent the time “interviewing” prospective providers and vetting their resumes, take a step back and do some more research.

Do they eat their own dogfood?

If an APM vendor is trying to sell you their monitoring tool, but you notice that their competitor’s tool is open in the background on their computer during a demo, that probably doesn’t instill confidence. Does your potential vendor use its own product or eschew it in favor of other solutions? Since you’re likely making the switch to a new service or technology, your new vendor should be prepared to demonstrate the same confidence in that offering.

If they don’t use it, ask why. If they do, ask for proof.

Is it a closed environment?

Is the technology interoperable with other offerings, or are you compelled to use only what the vendor is selling? What's more, what's the plan when you switch providers or if the vendor goes out of business? Bottom line: If they're locking you in, get out. The last thing you want is a broken legacy tool without any support. Unfortunately, it happens all the time, so make sure you have an option to get out.

Is there data to support their cause?

If you’re looking to link up with a new vendor, ask how they track customer needs and serve up effective solutions. The answer should be a brand of data analytics. If it’s a generalized “mission statement” about customization or best practices, take a pass. Hard data is critical to handle customer needs effectively.

Does it meet your needs or is it hype?

Does the product you’re considering really meet your needs? It’s easy to get caught in the hype trap and spring for something you don’t really need. Maybe a Magic Quadrant report convinced your boss it was the hot new ticket and they couldn’t turn it down. Instead, look for key characteristics such as single-pane-of-glass monitoring across physical and virtual servers as well as applications.

How does their licensing work?

How is licensing handled? Per-seat is the old standby, but it often serves to line vendors' pockets rather than offering you any significant benefit. Consider shopping for a provider that offers per-device licensing to help manage costs and simplify the process of getting up to speed. Too often, vendors provide overly complicated licensing. If you can't grasp how their licensing and pricing work, assume they did that on purpose.

Are they really trying to help you?

Whose success is your prospective partner focused on? While all IT vendors are in the market to make a healthy profit, they should have teams, systems and processes in place designed to assess your needs, measure your satisfaction and take action where warranted. If you get a “cog in the machine” or “check in the bank” vibe from your vendor, back away and find another offering.

Is their support adequate?

Support isn’t a static discipline. If you’re considering an agreement with a new provider, what kind of training and education is available to sysadmins down the road? If your vendor doesn’t offer this or even see the need, you may want to opt out.

Break It Down

It’s easy to talk generally about cost; you want to spend “X” and not exceed “Y”. Here’s the thing: You need a more concrete answer. Start with a decent cost calculator and see what shakes out. Refine as needed to find a bottom line that suits your needs and your budget.
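As a starting point, a back-of-the-envelope cost calculator can be as simple as this sketch (all figures below are hypothetical placeholders, not real pricing):

```python
def total_cost_of_ownership(license_per_device, devices, years,
                            annual_support_rate=0.2,
                            setup_hours=0, hourly_rate=0.0):
    """Rough TCO: up-front licensing, yearly support, and one-time setup labor."""
    licensing = license_per_device * devices
    support = licensing * annual_support_rate * years
    setup = setup_hours * hourly_rate
    return licensing + support + setup

# Hypothetical figures: 200 devices at $50 each over 3 years,
# 20% annual support, 40 hours of setup at $75/hour
print(total_cost_of_ownership(50, 200, 3, setup_hours=40, hourly_rate=75))  # 19000.0
```

Swap in real quotes from each vendor and the "Y you must not exceed" stops being a gut feeling.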

All companies eventually move up, laterally or simply into a need for action to keep up with IT trends. Do your workload a favor: Run this checklist first, adjust as needed and then dive into your new investment.

Network Protocols

It's obviously easy to tell when two humans are communicating with one another. It's not as easy for some folks to understand how two machines communicate with each other. They do; it's just less obvious. Hint: they don't Snapchat. Instead, components within your IT infrastructure, like routers or applications, use network protocols to chat with each other.

Network protocols become especially important when machines share information about your company. When machines don't communicate with each other properly, vital information is lost.

Moreover, network protocols alert sysadmins to the status of IT health and performance. If you're not paying attention to what your network protocols are trying to tell you, devices on your network could be failing without your knowledge.

To better understand the importance of network protocols, you should become familiar with the ones most commonly used.

SNMP (Simple Network Management Protocol)

IT pros use SNMP to collect information as well as to configure network devices such as servers, printers, hubs, switches, and routers on an IP network. How does it work? You install an SNMP agent on a device. The SNMP agent allows you to monitor that device from an SNMP management console. SNMP’s developers designed this protocol so it could be deployed on the largest number of devices and so it would have minimal impact on them. Also, they developed SNMP so that it would continue to work even when other network applications fail.
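To get a feel for what SNMP traffic actually carries, here is a minimal sketch (not a production encoder) of how a dotted OID is BER-encoded inside an SNMP message: the first two arcs pack into a single byte (40*x + y), and each remaining arc is written in base 128 with the high bit set on all but its last byte.

```python
def encode_oid(oid):
    """BER-encode a dotted OID string into the bytes used inside an SNMP PDU."""
    arcs = [int(a) for a in oid.split(".")]
    out = [40 * arcs[0] + arcs[1]]          # first two arcs share one byte
    for arc in arcs[2:]:
        chunk = [arc & 0x7F]                # low 7 bits last
        arc >>= 7
        while arc:                          # higher 7-bit groups get the 0x80 flag
            chunk.append((arc & 0x7F) | 0x80)
            arc >>= 7
        out.extend(reversed(chunk))
    return bytes(out)

# sysDescr.0, the OID most SNMP tutorials poll first
print(encode_oid("1.3.6.1.2.1.1.1.0").hex())  # 2b06010201010100
```

In day-to-day monitoring your tool handles this for you; seeing the wire format just demystifies what "lightweight" means here.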

WMI (Windows Management Instrumentation)

WMI is the Microsoft implementation of Web-Based Enterprise Management, a software industry initiative to develop a standard for accessing management information in the enterprise. This protocol creates an operating system interface that receives information from devices running a WMI agent. WMI gathers details about the operating system, hardware or software data, the status and properties of remote or local systems, configuration and security information, and process and services information. It then passes all of these details along to the network management software, which monitors network health, performance, and availability. Although WMI is a proprietary protocol for Windows-based systems and applications, it can work with SNMP and other protocols.

SSH (Secure Shell)

SSH is a UNIX-based command interface that allows a user to gain remote access to a computer. Network administrators use SSH to control devices remotely. SSH creates a protective “shell” through encryption so that information can travel between network management software and devices. In addition to the security measure of encryption, SSH requires IT administrators to provide a username, password, and port number for authentication.


Telnet

Telnet is one of the oldest communications protocols. Like SSH, it enables a user to control a device remotely. Unlike SSH, Telnet doesn't use encryption, and it has been criticized for being less secure. In spite of that, people still use Telnet because some servers and network devices still require it.

Monitoring Your Infrastructure

Like almost every other IT team out there, yours is probably dealing with an infrastructure composed of a mishmash of servers, network equipment, mobile devices, and applications. Automatically discovering, managing and monitoring all of this requires unified infrastructure and application monitoring technology that uses all four of these protocols.
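To illustrate what "unified" means in practice, here is a minimal sketch of a protocol dispatch table; the pollers are stubs standing in for real SNMP, WMI, SSH and Telnet client code:

```python
# Stub pollers standing in for real protocol-specific client implementations
def make_poller(protocol):
    return lambda device: f"{device}: polled via {protocol}"

POLLERS = {name: make_poller(name) for name in ("SNMP", "WMI", "SSH", "Telnet")}

def poll(device, protocol):
    """Route a device check to the right protocol-specific poller."""
    if protocol not in POLLERS:
        raise ValueError(f"unsupported protocol: {protocol}")
    return POLLERS[protocol](device)

print(poll("core-switch-01", "SNMP"))  # core-switch-01: polled via SNMP
```

One scheduler, four protocols: the monitoring loop stays the same no matter which device type it is talking to.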




How IT Pros Can Save 30 Minutes a Day
Learn how to eliminate time wasters and get 30 minutes of your day back

Nobody knows the value of time better than an IT pro. Staying ahead of issues gives IT breathing room to enhance the network, instead of wasting time on fixing problems. 2016 is no different: Your IT team will need to once again deploy patches, install new hardware and transition to yet another upgraded Windows platform.

That’s right. The start of a new year always brings with it new challenges, but 2016 stands out as a year that could bring unforeseen complications following the release of Windows 10. Depending on your deployment plan, moving over to the latest incarnation of Windows is a massive additional project.

To compensate, you and your team need to save 30 minutes a day this year. Our upcoming webinar on February 9th will help your team handle many of their core tasks quickly so they can concentrate on big projects like Windows 10.

In our upcoming webinar, we’ll discuss how using WhatsUp Gold infrastructure monitoring software will enhance your team’s ability to:

  • Manage and track your entire inventory, down to the component level
  • Configure new or replaced devices
  • Create network diagrams and stay within any necessary compliance
  • Many other necessary and vital tasks that your team handles on a daily basis

Understanding how to save time on regular tasks represents a massive opportunity for time savings over the course of 2016.

Save Every Precious Second You’ve Got

WhatsUp Gold provides all of the visibility into your entire infrastructure that your team needs to cut down on time-consuming tasks. IT administration is about managing a massive number of tasks. Knowing this, we've designed software that can save every precious second you've got.

The webinar will show how WhatsUp Gold can become an IT pro’s best friend, including the ability to:

  • Create a single pane of glass to monitor the overall health of the entire technical infrastructure
  • Provide highly customizable alerts that allow for automated features to address certain tasks
  • Integrate with other WhatsUp Gold plug-ins to help create a specific solution for your IT administration
  • Increase the ease of device configuration, auditing and configuration management
  • Enhance the ability to comply with regulations and increase the ease of internal audits

Learn How to Avoid IT Time Wasters 

Efficiency is the name of the game in the world of IT. Our upcoming webinar on February 9 at 2pm US ET will provide actionable ways for IT pros to examine their workflows and save 30 minutes a day.

Ipswitch surveyed IT professionals across the globe and it turns out that data security and compliance are top challenges for IT teams in 2016.

How We Did It

Ipswitch polled 555 IT team members who work at companies across the globe with more than 500 employees. We surveyed IT pros globally, partnering with Vanson Bourne in Europe, between October and November 2015 to learn about their file transfer habits and goals.


Totals by region: 255 in the US and 300 in Europe (100 each in the UK, France and Germany)

Totals by industry:

  • Banking/finance 15%
  • Government 15%
  • Healthcare 16%
  • Manufacturing 10%
  • Insurance 6%
  • Retail 6%
  • Other (includes Technology, Consulting, Utilities/Energy, Construction, & others) 32%

2016 State of Data Security and Compliance Infographic

Click on the infographic to see full size. 



Ipswitch’s FTPS server gave the Broncos the defense they needed for protecting data in motion.

Data Security a Huge Issue for NFL Teams

After a season of highs and lows, the Denver Broncos are headed to Super Bowl 50 to face the Carolina Panthers. But teamwork, dedication and hard work aren’t the only things that contributed to the Broncos’ surge to the NFL’s championship game.

The amount of data generated by an NFL team is staggering. Besides statistics, plays, strategies and a crunch of information that would make some quarterbacks' heads hurt, the business of running a professional sports team requires videos, photos and graphics to be distributed to special events, marketing and fan relations partners.

Because of email and private network restrictions, all of this data used to be downloaded to discs, thumb drives or hard drives. They would then be delivered by hand to players, coaches and other important members of the Broncos team.

WS_FTP is Broncos’ Choice for an FTPS Server

But this process was time-consuming, inefficient and a huge data security risk. Ipswitch's WS_FTP Server came to the rescue the same way Brock Osweiler saved the day, or at least didn't blow it, this past season when quarterback Peyton Manning missed some of the action with an injured foot.

The franchise's use of Ipswitch WS_FTP Server, an FTPS (file transfer protocol secure) server, gave it a great defense for protecting data in motion, including plays, high-definition videos, graphics and more destined for players, coaches and business partners. You could argue file transfer capabilities didn't directly get the Broncos to the biggest game in football, but they certainly didn't hurt.

Unlike Osweiler, who subbed for Manning only temporarily, WS_FTP Server was a permanent solution to the Broncos’ file transfer woes. WS_FTP Server is secure enough to keep confidential team information out of the wrong hands – some would unfairly imply out of the New England Patriots’ hands. It’s also powerful enough to handle the influx and growth of data, and gives ultimate visibility and control for top achievement.

Another key quality of WS_FTP Server is its uninterrupted service that increases uptime, availability and consistent performance with a failover configuration. Unlike the Microsoft Surface tablets that failed the Patriots during the recent AFC Conference Championship, WS_FTP Server won’t go down, or leave the Broncos’ files in limbo, unprotected and undelivered.

NFL Becoming a Technology-Driven Business

The NFL’s need for quality IT service goes beyond devices displaying plays and diagrams. File transfer played a role in the way football went from throwing a pig skin down a grassy field to being a technology-driven business.

With just a username and password, partners can complete file transfers in a few clicks. So before the Broncos head to Santa Clara for the big game, the team can rest easy knowing its files are secure and accessible to all the players, coaches, team executives and business professionals keeping the team running smoothly.

Read the Ipswitch File Transfer Case Study: Denver Broncos

We'll find out Sunday if the Broncos and Manning can beat the tough Panthers, if the commercials will make us laugh and if Beyoncé and Coldplay will dazzle with their halftime show. But one thing the Broncos and all Ipswitch customers can always be assured of is the success, security and compliance of the WS_FTP Server file transfer solution.


Is There Such a Thing as too Much Visibility?
Sometimes broad visibility can make it hard to see

Every day, many of us commuters have visibility issues and are at the mercy of unpredictable traffic. I often have to leave a LOT of buffer to get to work in case I have an important meeting. Luckily, there are tools that put good traffic visibility at my fingertips. For instance, I rely almost entirely on Google Maps to "predict" how long it is going to take me to get to work, or to any other place for that matter. This type of visibility is crucial when things go wrong, like an accident: Google Maps will reroute me or at least give me a revised ETA so that I can make adjustments.

Fix Before You Fail

Is there an analogy to this in the online world? You would think that service providers and large enterprises would have this level of visibility into their networks, so that when things go wrong, like a device failure, they can pinpoint the root cause right away and take corrective action. Better yet, they can stay ahead of the game by watching for performance bottlenecks or warning signs of failure and fixing the issues before end users are affected.

BT Broadband Network Outage is a Lesson for SMBs

So, when the very large BT broadband network went down today, I wondered if there is such a thing as too much visibility. Despite the service-provider level of visibility BT has, it took the company almost two hours to get all of its customers back online. Now imagine you were an SMB or a mid-sized organization faced with a similar outage. Without sufficient visibility into the problem, your network could be down for hours, costing you, your employees and your customers significantly in terms of revenue, productivity and reputation.

How can today’s SMBs get service provider level visibility that won’t break the bank?  Here are some pointers:

  • Invest in a network monitoring tool that can discover all of your critical infrastructure
  • Make sure that the tool can provide insight into availability, performance, and security of your infrastructure
  • Choose a tool that is broad enough to support multiple monitoring technologies (e.g. SNMP, WMI, network flows) your entire infrastructure (network devices, servers, wireless devices, applications, virtual machines, etc.)
  • Ensure that the tool can give you proactive insights as well as reactive alerts
  • Consider the total cost of ownership of the tool, from initial deployment through ongoing maintenance. Remember that DIY is not always free over the lifetime of owning the tool
  • Do not “under monitor” during the evaluation of the tool. Develop a monitoring configuration that reflects the entire production network, not just the subset suitable during the evaluation

Did you know your mobile phone and wearables are just as appealing to hackers as your online bank account? No one is impervious to increasingly sophisticated mobile device hacking. Case in point: James Clapper, the U.S. director of national intelligence (DNI), had his phone hacked last month, with calls rerouted to the Free Palestine Movement. And in October 2015, CIA director John Brennan's mobile device fell victim to a group of "pot-smoking teenagers." Bottom line? Not even next-gen hardware is completely safe.

So long as support enforces two-factor authentication and staff doesn’t access free Wi-Fi hotspots (especially when handling business data), a mobile phone should be safe, right? Nope. As noted by Dialed In and Wired, determined hackers do a lot more with your mobile and wearable technology than you may realize.

Mobile Phones: Hackers’ Best Friend

Any iPhone newer than the 4 comes with a high-quality accelerometer, or “tilt sensor.” If hackers access this sensor and you leave the phone on your desk, it is possible for them to both detect and analyze the vibration of your computer keyboard and determine what you’re typing, with 80 percent accuracy. So, say you type in the address of your favorite banking web portal and then your login credentials; hackers now have total access.

App developers have wised up to hackers targeting microphones and made it much more difficult to gain access without getting caught. Enterprising criminals, however, have discovered a way to tap a phone's gyroscope and detect sound waves through it while the user plays Angry Birds or any other orientation-based program. So, next time you talk about finances with your significant other while three-starring a new level in your go-to mobile game, you may also be giving hackers the information they need to steal from you.

Targeting RFID Chips

In an effort to make retail purchases easier and more secure, many credit cards come equipped with RFID chips. Smartphones, meanwhile, include near-field communication (NFC) technology that allows them to transmit and receive that RFID data. The risk, here, is that hackers who manage to compromise your phone can leverage malware to read the information from a card’s RFID chip if you’re storing it in a nearby wallet or card-carrying mobile case. Then they can make a physical copy. You’re defrauded and don’t even know it.

“Say Cheese”

Mobile cameras have also come under scrutiny, since hacking this feature lets attackers take snaps of you or your family whenever and wherever they want. Despite improvements in basic phone security, though, it’s still possible for malicious users to take control of your camera. It goes like this: Operating systems like Android now mandate that a preview of any new photograph must be displayed on-screen, but don’t specify the size of this image. As a result, cybercriminals can take surreptitious photographs and then send them to anyone at any location.

MDM Leads to Risk

A large number of smartphones contain weak mobile device management (MDM) tools installed by carriers. And although reaching these tools in a target phone requires close proximity and the use of rogue base stations or femtocells, the risk is substantial. Attackers could take total control of your device.

Fit or Foul?

Mobile phones aren't the only technology at risk; wearables are also open to attack. What can hackers do to these devices? Back in March 2015, researchers notified wearable maker Fitbit that its device could be hacked in fewer than 10 seconds. While initial reports focused on logical changes such as altering steps taken or distance walked, as noted by The Hacker News, it wasn't long before hackers discovered a way to inject malware that could potentially spread to all synced devices.

Potentially Lethal Consequences

Security flaws in wireless-enabled pacemakers could allow hackers to take control of (and then stop) this critical device as well. In September 2015, a team from the University of Southern Alabama managed to access a functioning pacemaker and “kill” a medical mannequin attached to the device.

Medical devices such as insulin pumps and implantable defibrillators have notoriously weak security, in particular a lack of encryption and weak or default passwords, which cybercriminals can easily exploit to take control. The result? Delivering a fatal drug overdose or shocking perfectly healthy patients without warning.

Be Diligent About Mobile Security

The lion’s share of existing security issues stem from poor app development in mobile and wearable devices. Mobile device developers prioritize speed over security and eschew critical features such as encrypted commands, limited application sessions and disabling repeat requests. And while recognizing these flaws is the first step to improving mobile safety, users need to be aware of today’s risk factors. Right now, hackers can do far more with a mobile or wearable than the user may realize.

In the early years of IT, data was stored on paper tapes

What did an IT position look like in the '70s, '80s and '90s? Far fewer mobile endpoints, for one thing. Compared with today, the history of information technology boasts some surprising differences in day-to-day tasks and the technology that was available. IT support has come a long way, folks.

How Far Back?

IT has been around almost as long as humans. If you think about it, hieroglyphics are just a script devs don’t use anymore. Mechanical devices such as the slide rule, the Difference Engine, Blaise Pascal’s Pascaline and other mechanical computers qualify as IT, too. But this particular journey begins well into the 20th century.

The 1970s: Mainly Mainframes

Computers of this era were mostly mainframes and minicomputers, and a history of information technology wouldn’t be complete without mentioning them. IT job roles included manually running user batch tasks, performing printer backups, conducting system upgrades via lengthy procedures, keeping terminals stocked with paper and swapping out blown tubes. IT staff was relegated mainly to basements and other clean rooms that housed the big iron. System interconnectivity was minimal at the time, so people had to bridge those gaps themselves. This was the motivation behind the Internet (or the ARPANET, as it was known then).

The 1980s: Say Hello to the PC

This decade saw the growth of the minicomputer (think DEC VAX computers) and the introduction of the PC. Sysadmins crawled out of the basement and into the hallways and computer rooms of schools, libraries and businesses that needed them onsite. The typical IT roles at this time consisted of installing and maintaining file and print servers to automate data storage, retrieval and printing. Other business roles included installing and upgrading DOS on PCs.

If you worked in a school, you saw the introduction of the Apple II, Commodore 64 and, eventually, the IBM PC. But the personal computer was more expensive, geared toward business use and not deployed in schools very much. It was the Apple II that propelled the education market forward, and if you worked support at a school in the '80s, you knew all about floppy disks, daisy wheel printers and RS-232 cables.

The 1990s: Cubicles, Windows and the Internet

This generation of IT worked in cubicles (think “Tron” or “Office Space“), often sharing that space alongside the users they supported. Most employees were using PCs with Windows by this time, and IT support was focused on networking, network maintenance, PC email support, Windows and Microsoft Office installations — and adding memory or graphics cards for those who needed them.

Toward the end of the decade, the Web’s contribution to Internet connectivity became arguably the most requested computing resource among growing businesses. Although there was no Facebook, Twitter or LinkedIn yet (Friendster would kick off that trend in 2002), employers still worried about productivity and often limited Web access. Oh, and if you could go ahead and add modems to PCs, run phone lines for those who needed dial-up access and Internet-enable the business LAN, that would be great.

Today’s IT: Welcome to Apple, Patch Tuesday and BYOD

Today's IT job roles include the rebirth of Mac support, the introduction of social media (and the blocking of its access at work), constant security patches (Patch Tuesday on Windows, for instance), the advent of BYOD and DevOps automation.

The continued consumerization of IT (essentially now BYOD) meant that IT pros had “that kind” of job where friends and family would ask for help without pause. The one common thread through the years? The growth of automation in the IT role — something that will continue to define tomorrow’s helpdesk.

Image source: Wikimedia Commons

How IT Pros Can Navigate a Job Interview
What can you do to make the IT job interview go well?

You’ve landed an IT job interview. That’s the good news. Now you have the interview itself, and let’s be honest, it’s never fun. Most candidates don’t like putting on a show of the software and protocols they’re familiar with. Even actors aren’t in love with auditioning. The “social” aspect of recruitment isn’t something you should need to ace for an admin position, but it has to be done.

If the job is a really good one — the technical work that’ll challenge your current support acumen (and compensate you well for the weekend maintenance) — you probably have a bit of an imposter complex even just applying. When the “ideal candidate” is an infosec wizard, how dare you present yourself? But hey, you believe you can do it, and the pay is great. So read that magazine and wait to be met.

Find Strengths in Technical Weaknesses

What can you do to make the IT job interview go well? Some things should be no-brainers, but there’s a reason think pieces keep pounding them into your head (present article excluded). Don’t be “creepy” with company research, advises InformationWeek, and don’t dress for the beach unless an offbeat SMB suggests otherwise. Do pay attention to the job description, though (don’t ask questions it already answered), and learn enough about the employer to imply a healthy interest.

Ultimately, play to your strengths. Lawyers have a saying: If the facts are against you, argue the law; if the law is against you, argue the facts. If you don’t have hands-on experience in data center migration, stress your credentials in bandwidth control during this process. Show that you know what’s involved in secure file transfers even if you haven’t managed them offsite. If your formal credentials are thin, play up your experience in the network trenches during the Super Bowl traffic spike.

Be Mindful of the Interviewers Who Don’t Work in IT

With luck, your interview with an IT rep will find some common ground. There may be scripts you’re both comfortable reading or security issues you should both be following. This will give you the chance to talk like a human about what the job will involve. One of the bigger challenges of an IT job interview, however, is that you may also meet someone from the business side. That interviewer knows only vaguely what network monitoring tools are and is probably a bit intimidated by the idea of bandwidth or network latency. In other words, they probably feel like the imposter, interviewing someone for a seat in ops they don’t fully understand.

But one thing you definitely don’t want to do is remind the interviewer of their own uncertainties. Talk confidently about the work, without going so deep into the technical weeds that the interviewer isn’t sure what you’re saying. Although this shorthand may demonstrate fluency in a multi-vendor environment, it can also suggest you can’t communicate well with the other departments.

You’re a Social Animal

For better or worse, a job interview is a social interaction. Some sysadmins and IT pros would gladly trade the spotlight for wrestling with a wonky script or normalizing office bandwidth.

Nonetheless, this can produce a disconnect. As one IT candidate reported by Dice.com said when asked to describe the ideal work environment, “I just want a job where I can go in a room, do my work and be left alone.”

That candidate probably speaks for many admins, developers, and other overworked helpdesk staff, but he didn’t get the job. Business people (including those who work for nonprofits and government) tend to celebrate charisma, and for good reason: The job is all about meeting client needs, which means talking to the customer to understand what they really want.

The good news? Your competition is other techies, probably just as geeky at heart.

The bottom line is that if you’re comfortable about your qualifications for the job — even if it is pushing your limits — that confidence will show through, and help you navigate the rocky spots. And who knows, you may be just who they’re looking for.


In this blog, part of our series on IT best practices, I’ll share how network mapping works and how it will give you a complete vantage point of your entire network.

Modern networks are full of connected devices, interdependent systems, virtual assets and mobile components. Monitoring each of these systems calls for technology that can discover and map everything on your network. Understanding and enacting the best practices of network mapping lays the groundwork for successful network monitoring.

An Overview of Network Mapping

Most forms of network management software start from what’s known as a “seed scope” — a range of addresses that defines where discovery begins. Network mapping then discovers devices using a number of protocols such as SNMP, SSH, Ping, Telnet and ARP to determine everything connected to the network.
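To make the idea concrete, here is a minimal Python sketch of how a seed scope works: it is simply an address range that the discovery engine expands into individual hosts to probe. The function name and the example range are illustrative, not part of any particular product.

```python
import ipaddress

def expand_seed_scope(cidr):
    """Expand a seed scope (a CIDR string) into the individual host
    addresses a discovery engine would then probe with Ping, SNMP, etc."""
    network = ipaddress.ip_network(cidr)
    return [str(host) for host in network.hosts()]

# A /29 seed scope yields six probe-able host addresses.
targets = expand_seed_scope("192.168.1.0/29")
print(targets)  # ['192.168.1.1', ..., '192.168.1.6']
```

A real discovery pass would then attempt SNMP, SSH or Ping against each address in `targets`; the point here is only that the seed scope bounds what gets probed.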

Adequately mapping a large network requires being able to make use of both Layer 2 and Layer 3 protocols. Together, they combine to create a comprehensive view of your network.

The Two Types of Network Maps

Network discovery protocols are broken into two categories, corresponding to layers of the network stack:

  1. Layer 2: Defined as the “data link layer,” these protocols discover port-to-port connections and linking properties. Layer 2 discovery protocols are largely proprietary, which is why the vendor-neutral Link Layer Discovery Protocol (LLDP) must be enabled on every network device.
  2. Layer 3: Defined as the “network layer,” these protocols explore entire neighborhoods of devices by using SNMP-based technology to discover which devices interact with other devices.

Surprisingly, most IT infrastructure monitoring solutions rely solely on Layer 3 protocols. While this produces a serviceable overview of the network, successful network mapping practices call for using Layer 2 protocols as well. Layer 2 protocols provide the important information about port-to-port connectivity and connected devices that allows for faster troubleshooting when problems arise.
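As a toy illustration of why both layers matter, the sketch below merges hypothetical Layer 3 results (which devices exist) with hypothetical Layer 2 results (which ports connect them) into a single adjacency map of the kind a diagramming tool consumes. All device names, ports and table shapes are invented for the example.

```python
# Hypothetical discovery output: Layer 3 tells us *which* devices exist,
# Layer 2 tells us *how* they are physically connected, port to port.
layer3_devices = {
    "10.0.0.1": "core-router",
    "10.0.0.2": "switch-a",
    "10.0.0.3": "server-1",
}
layer2_links = [
    ("core-router", "Gi0/0", "switch-a"),
    ("switch-a", "Gi0/1", "server-1"),
]

def build_topology(devices, links):
    """Combine both layers into a simple adjacency map for diagramming."""
    topology = {name: [] for name in devices.values()}
    for src, port, dst in links:
        topology[src].append((port, dst))
    return topology

topo = build_topology(layer3_devices, layer2_links)
```

With Layer 3 data alone, `topo` would list the three devices but every adjacency list would be empty — you would know what is on the network, but not which port to check when `server-1` drops off it.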

Conveniently enough, Ipswitch WhatsUp Gold uses Layer 2 discovery with ARP cache and the Ping Sweep method, combined with Layer 3 SNMP-enabled discovery methods to provide all the information needed to quickly identify and address problems.

Creating Network Diagrams

Network diagrams make use of the data generated by Layer 2 and Layer 3 protocols, and are super helpful for visualizing the entire network. One important best practice for network mapping is using network diagrams to ensure that the existing networks and IT processes are fully documented – and updated when new processes are added.

Microsoft Visio is the leading network diagramming software on the market. Once data is imported, Visio allows for the creation of robust, customizable diagrams that are easy to share across organizations. Yet network managers who rely on Visio quickly discover that its lack of an auto-discovery feature severely limits its usefulness.

Ipswitch WhatsConnected was created to solve this problem by auto-generating topology diagrams, which can be useful on their own or exported to Visio, Excel and other formats with a single click. WhatsConnected makes use of Layer 2 and Layer 3 protocols to provide Visio with everything it needs to generate the powerful diagrams it’s known for.

Instituting solutions that follow these suggestions should provide the foundation needed for real-time network monitoring. Coming up next in our best IT practices series, we’ll review network monitoring. Learning how to make the most of network discovery and network mapping will give your organization cutting-edge network monitoring capabilities.

Related articles:

Best Practices Series: Network Discovery

Best Practices Series: IT Asset Management

Football is no longer simply a game played on grass or turf — it's now awash in tech.

Things on the gridiron have changed. Once the province of paper-based play analysis, complicated hand signals and rules reliant on the eyes and ears of human refs, football is now awash in tech. Just take a look at the broken Surface tablets from last week’s AFC championship. With the Panthers and Broncos squaring up for Super Bowl 50 next week, here’s a look at the NFL technology (and IT teams behind it) that help elevate the sport while keeping its time-honored traditions intact.

It starts at Art McNally GameDay Central, located at NFL Headquarters in New York City. From here, Game Operations staff are tasked with prepping every communication and broadcast system before gametime while checking for radio frequency conflicts and handling failures prior to air. From a corporate standpoint, the GameDay crew is analogous to CIOs and their admin staff; they get the “big picture,” ensuring sysadmins on the ground have the information necessary to get their jobs done.

Clean Frequencies

Key to Game Ops is keeping radio frequencies clean. As the number of frequencies licensed by the Federal Communications Commission (FCC) continues to grow, fewer clear channels exist for team officials and their support staff to use. With this in mind, operations must make sure both teams, their coaches and all TV network crews stay on their assigned frequency bands for headsets, microphones and any Wi-Fi connections to prevent accidental “jamming,” which often leads to signal loss at a critical moment.

Operations staff are also responsible for ferreting out any “not-so-accidental” frequency interruptions; the New England Patriots’ “Headsetgate” comes to mind, especially since the team regularly shows potential as a Super Bowl contender. Did they really tamper with headsets? Maybe, maybe not — there have been a number of accusations over the past few years — but what matters for Super Bowl 50 is that Game Ops staff are up to the challenge of tracking down any technical issues regardless of origin or intent.

‘Instant’ Replay

Game Ops staff are also responsible for overseeing the use of NFL Instant Replay technology, which got its start in 1986, was removed in 1992 and then reimplemented in 1999. GameDay teams use the league’s proprietary NFL Vision software to analyze replays and communicate with both the stadium’s replay official and the referee before he goes under the hood — both of which shorten the length of a replay review. Think of it like analytics; the NFL is investing in software that can capture relevant data, serve it up to experts and empower users in (or on) the field.

On the Ground

Crews in the stadium during Super Bowl 50 are responsible for managing a few new pieces of hardware, including Microsoft Surface tablets used to analyze defensive and offensive formations. But because these tablets have no Internet access and their software cannot be altered, the league is currently testing a “video review” feature which may be implemented in future seasons.

Not everything works perfectly, though. As noted by Geek Wire, a problem during the December 8, 2015, matchup between Dallas and Washington forced these tablets out of service and left coaches with pen-and-paper diagrams. And on January 24, 2016, in the AFC Championship game, the Patriots suffered significant tablet malfunctions, causing more than a few frustrations on the sidelines, especially since the Denver Broncos weren’t required to give up their still-working tablets under the NFL’s “equity rule.” February’s onsite IT teams will need to monitor not only the performance of the Sideline Viewing System, but also its connection to each team’s tablets. System monitoring comes to mind here: Small precursor events in still-picture taking or tablet connections could act as warning signs for larger problems, if caught early enough.

Real-Time Stats

There’s also a need for data aggregation as the league moves toward full adoption of M2M systems like Zebra player tracking. Using RFID chips in each player’s shoulder pads, it is now possible to track their movements in-game in real time, then provide “next-generation stats” for fans. The larger value, however, comes in the form of actionable data obtained by recording and mining the information collected by these sensors. NFL technology professionals are tasked with not only ensuring this data stream is uninterrupted, but also making something useful out of the final product — a curated version of player data that trainers can use to improve Super Bowl performance.

Data Encryption

NFL teams need to transfer highly sensitive files containing details regarding trades, play books, and player contracts. In the past, the Denver Broncos used thumb drives and CDs to physically pass around large data files, including high-res video and image files. It was a manual, unstructured process that wasted time and lacked even basic security controls. Email was not an option because of the file sizes involved, since most IT teams limit the size of email attachments.

In order to secure their data in motion and move it without hassle, regardless of the size, the Broncos picked Ipswitch WS_FTP software for secure data transfer internally between departments, and externally with partners.

A New Career?

Interested in working support for the NFL? It’s possible: While the Cleveland Browns are hiring an intern, the Washington Redskins need help at the helpdesk and the Seattle Seahawks are looking for a CRM data analyst. Interestingly, the job descriptions read like standard tech sector advertisements; NFL clubs have become enterprises in and of themselves, requiring multiple teams of backend IT personnel in addition to those on the ground during regular and postseason play.

Even the NFL is not all glitz and glory for IT. In fact, the league’s mandate is similar to most tech firms: Keep systems up and running while collecting and curating actionable data. Ultimately it’s a team effort — the work of many, not the power of one, moves the chains and snatches victory from the jaws of overtime.