5 tips for small IT teams

IT spending is on track to hit $3.9 trillion worldwide by 2019, according to Gartner. For support, however, more technology spending doesn’t always translate to more full-time employees, and outsourcing doesn’t necessarily balance your workload. If you work on a small IT team, no doubt you’re feeling the crunch of too much work and not enough manpower. Here are five tips to help growing IT teams make ends meet.

1. Job Number One

According to Sébastien Baehni, VP of Engineering at end-user analytics company Nexthink, one of the biggest challenges facing smaller helpdesks is prioritizing tasks to ensure other employees “can work and do their day job.” The problem? It’s easy to get lost in “urgent” requests from executives or in ongoing technical issues, leaving other “very important” tasks on the back burner.

In addition to affecting production and throughput, leaving these issues unresolved opens the door for security vulnerabilities. If possible, take a deep breath and ask for user feedback. This lets you tackle the “low-hanging fruit” issues which, although they may generate thousands of reports and complaints, aren’t at the top of your urgent list. Often, however, they’re easy to eliminate and can clear space for more critical line-of-business (LOB) tickets.
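One way to make “low-hanging fruit” visible is to rank open issues by report volume against estimated fix effort, so high-impact, low-effort fixes surface first. A minimal Python sketch, with invented issue names and numbers:

```python
# Hypothetical triage sketch: rank open issues by reports per hour of
# estimated effort. The issue list and effort figures are illustrative,
# not pulled from any real ticketing system.

def triage(issues):
    """Sort issues by reports-per-hour-of-effort, descending."""
    return sorted(issues, key=lambda i: i["reports"] / i["effort_hours"],
                  reverse=True)

issues = [
    {"name": "VPN drops on wake",       "reports": 1200, "effort_hours": 4},
    {"name": "Printer driver mismatch", "reports": 3000, "effort_hours": 2},
    {"name": "ERP batch job failing",   "reports": 15,   "effort_hours": 40},
]

for issue in triage(issues):
    print(issue["name"])
```

Here the noisy-but-easy printer issue jumps to the top of the queue, clearing thousands of complaints for two hours of work.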

2. Mind the Gap

There are more than 210,000 unfilled cybersecurity jobs in the United States, as noted by Infosecurity Magazine, and upwards of a million worldwide. So even if the approved budget includes a new hire (or two), you may still be unable to find the right candidate to fill the position. And while it may seem counterintuitive given your existing workload, the simplest way to address security concerns and get your department back on track is sending at least a few staff members for up-to-date security training. The result? You fix security holes rather than patching them with duct tape and hoping for the best.

3. All for One

Baehni also offers advice for IT teams looking for the most effective way to handle diverse task lists with limited staff. In his experience, an “all-for-one” approach — wherein teams work together to solve emergent issues and employees actively identify problems and solutions on their own — produces better results than “silos” or compartmentalization. It makes sense: What happens if your ideal network expert gets sick or moves to another company? By diversifying talent and hiring people with the “agility, curiosity and intellectual honesty required to identify issues,” it is possible to build a team of self-improving experts who collectively handle critical support tasks.

4. Overtime Opt-Out

Overtime is often a bone of contention for sysadmins. According to Fortune, for example, tech giant Amazon has come under fire for expecting big overtime commitments from employees, in some cases giving the eCommerce retailer an air of “inhuman meritocracy”. Beyond loss of focus and potential burnout associated with mandatory overtime, however, there’s the larger problem of “making things work.” If support teams are constantly taking on overtime just to complete basic tasks, executives get the sense that the understaffed model is a success since nothing’s actively falling apart. Sometimes, a little pressure is a good thing.

5. Embrace the Shadows

What happens when IT can’t keep up? Shadow IT emerges. According to Windows IT Pro, in fact, a survey of CIOs revealed that companies are spending between 5 and 15 percent of their budget managing shadow IT — money that could be better spent taking the pressure off you. The simplest route between shadow and light? Embrace popular tools and processes where possible, rather than fighting the battle on principle. You’ll find happier users and fewer security holes to patch.

Want to take the pressure off your team? Find ways to target what matters, work smarter not harder and leverage the right tools for the job.

When transitioning to a new solution, do IT vendors elicit a mix of anticipation and fear? That makes sense. You’re eager to see the new service hard at work, but simultaneously concerned it won’t live up to the hype or deliver on the supplier’s promises.

Also, these transitions tend to cost a lot of money and resources, so a failed transition usually doesn’t bode well for the decision-maker. Although no transition is foolproof, it’s worth running down the following support checklist. Have you covered all your bases, or is there more work to do before you take the plunge?

Have you researched other vendors?

The act of bringing in a new tech vendor is a lot like hiring a new employee. If you haven’t spent the time “interviewing” prospective providers and vetting their resumes, take a step back and do some more research.

Do they eat their own dogfood?

If an APM vendor is trying to sell you their monitoring tool, but you notice that their competitor’s tool is open in the background on their computer during a demo, that probably doesn’t instill confidence. Does your potential vendor use its own product or eschew it in favor of other solutions? Since you’re likely making the switch to a new service or technology, your new vendor should be prepared to demonstrate the same confidence in that offering.

If they don’t use it, ask why. If they do, ask for proof.

Is it a closed environment?

Is the technology interoperable with other offerings, or are you compelled to use only what the vendor is selling? What’s more, what’s the plan if you switch providers or the vendor goes out of business? Bottom line: If they’re locking you in, get out. The last thing you want is a broken legacy tool without any support. Unfortunately, it happens all the time, so make sure you have an exit option.

Is there data to support their cause?

If you’re looking to link up with a new vendor, ask how they track customer needs and serve up effective solutions. The answer should be a brand of data analytics. If it’s a generalized “mission statement” about customization or best practices, take a pass. Hard data is critical to handle customer needs effectively.

Does it meet your needs or is it hype?

Does the product you’re considering really meet your needs? It’s easy to get caught in the hype trap and spring for something you don’t really need. Maybe a Magic Quadrant report convinced your boss it was the hot new ticket and they couldn’t turn it down. Instead, look for key characteristics such as single-pane-of-glass monitoring across physical and virtual servers as well as applications.

How does their licensing work?

How is licensing handled? Per-seat is the old standby, but it often serves to line vendors’ pockets rather than offering you any significant benefit. Consider shopping for a provider that offers per-device licensing to help manage costs and simplify the process of getting up to speed. Too often, vendors provide overly complicated licensing. If you can’t grasp how their licensing and pricing work, assume they did that on purpose.

Are they really trying to help you?

Whose success is your prospective partner focused on? While all IT vendors are in the market to make a healthy profit, they should have teams, systems and processes in place designed to assess your needs, measure your satisfaction and take action where warranted. If you get a “cog in the machine” or “check in the bank” vibe from your vendor, back away and find another offering.

Is their support adequate?

Support isn’t a static discipline. If you’re considering an agreement with a new provider, what kind of training and education is available to sysadmins down the road? If your vendor doesn’t offer this or even see the need, you may want to opt out.

Break It Down

It’s easy to talk generally about cost; you want to spend “X” and not exceed “Y”. Here’s the thing: You need a more concrete answer. Start with a decent cost calculator and see what shakes out. Refine as needed to find a bottom line that suits your needs and your budget.
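A cost calculator can start as a few lines of arithmetic. Here’s a hedged Python sketch; every price and count below is a made-up input, not a real quote:

```python
# Illustrative TCO sketch: turn a vague "spend X, don't exceed Y" into a
# concrete three-year figure. All numbers are placeholder assumptions.

def three_year_cost(license_per_device, devices, annual_support, onboarding):
    """Total cost of ownership over three years for a per-device license."""
    return license_per_device * devices + annual_support * 3 + onboarding

total = three_year_cost(license_per_device=40, devices=250,
                        annual_support=2000, onboarding=1500)
print(f"3-year TCO: ${total:,}")  # 3-year TCO: $17,500
```

Swap in your own device counts and quoted rates, then compare the bottom line against your budget ceiling.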

All companies eventually move up, move laterally or simply find themselves needing to act to keep up with IT trends. Do your workload a favor: Run this checklist first, adjust as needed and then dive into your new investment.

In the early years of IT, data was stored on paper tapes

What did an IT position look like in the ’70s, ’80s and ’90s? Far fewer mobile endpoints, for one thing. Compared to today, the history of information technology boasts some surprising differences in day-to-day tasks and the technology that was available. IT support has come a long way, folks.

How Far Back?

IT has been around almost as long as humans. If you think about it, hieroglyphics are just a script devs don’t use anymore. Mechanical devices such as the slide rule, the Difference Engine, Blaise Pascal’s Pascaline and other mechanical computers qualify as IT, too. But this particular journey begins well into the 20th century.

The 1970s: Mainly Mainframes

Computers of this era were mostly mainframes and minicomputers, and a history of information technology wouldn’t be complete without mentioning them. IT job roles included manually running user batch tasks, performing printer backups, conducting system upgrades via lengthy procedures, keeping terminals stocked with paper and swapping out blown tubes. IT staff was relegated mainly to basements and other clean rooms that housed the big iron. System interconnectivity was minimal at the time, so people had to bridge those gaps themselves. This was the motivation behind the Internet (or the ARPANET, as it was known then).

The 1980s: Say Hello to the PC

This decade saw the growth of the minicomputer (think DEC VAX computers) and the introduction of the PC. Sysadmins crawled out of the basement and into the hallways and computer rooms of schools, libraries and businesses that needed them onsite. The typical IT roles at this time consisted of installing and maintaining file and print servers to automate data storage, retrieval and printing. Other business roles included installing and upgrading DOS on PCs.

If you worked in a school, you saw the introduction of the Apple II, Commodore 64 and, eventually, the IBM PC. But the IBM PC was more expensive, aimed at business use and rarely deployed in schools. It was the Apple II that propelled the education market forward and, if you worked support at a school in the ’80s, you knew all about floppy disks, daisy wheel printers and RS-232 cables.

The 1990s: Cubicles, Windows and the Internet

This generation of IT worked in cubicles (think “Tron” or “Office Space“), often sharing that space alongside the users they supported. Most employees were using PCs with Windows by this time, and IT support was focused on networking, network maintenance, PC email support, Windows and Microsoft Office installations — and adding memory or graphics cards for those who needed them.

Toward the end of the decade, the Web’s contribution to Internet connectivity became arguably the most requested computing resource among growing businesses. Although there was no Facebook, Twitter or LinkedIn yet (Friendster would kick off that trend in 2002), employers still worried about productivity and often limited Web access. Oh, and if you could go ahead and add modems to PCs, run phone lines for those who needed dial-up access and Internet-enable the business LAN, that would be great.

Today’s IT: Welcome to Apple, Patch Tuesday and BYOD

Today’s IT job roles include the rebirth of Mac support, the introduction of social media (and the blocking of its access at work), constant security patches (Patch Tuesday on Windows, for instance), the advent of BYOD and DevOps automation.

The continued consumerization of IT (essentially now BYOD) meant that IT pros had “that kind” of job where friends and family would ask for help without pause. The one common thread through the years? The growth of automation in the IT role — something that will continue to define tomorrow’s helpdesk.

Image source: Wikimedia Commons


You’ve landed an IT job interview. That’s the good news. Now you have the interview itself, and let’s be honest, it’s never fun. Most candidates don’t like putting on a show of the software and protocols they’re familiar with. Even actors aren’t in love with auditioning. The “social” aspect of recruitment isn’t something you should need to ace for an admin position, but it has to be done.

If the job is a really good one — the technical work that’ll challenge your current support acumen (and compensate you well for the weekend maintenance) — you probably have a bit of an imposter complex even just applying. When the “ideal candidate” is an infosec wizard, how dare you present yourself? But hey, you believe you can do it, and the pay is great. So read that magazine and wait to be met.

Find Strengths in Technical Weaknesses

What can you do to make the IT job interview go well? Some things should be no-brainers, but there’s a reason think pieces keep pounding them into your head (present article excluded). Don’t be “creepy” with company research, advises InformationWeek, and don’t dress for the beach unless an offbeat SMB suggests otherwise. Do pay attention to the job description, though (don’t ask questions it already answered), and learn enough about the employer to imply a healthy interest.

Ultimately, play to your strengths. Lawyers have a saying: If the facts are against you, argue the law; if the law is against you, argue the facts. If you don’t have hands-on experience in data center migration, stress your credentials in bandwidth control during this process. Show that you know what’s involved in secure file transfers even if you haven’t managed them offsite. If your formal credentials are thin, play up your experience in the network trenches during the Super Bowl traffic spike.

Be Mindful of the Interviewers Who Don’t Work in IT

With luck, your interview with an IT rep will find some common ground. There may be scripts you’re both comfortable reading or security issues you should both be following. This gives you the chance to talk like a human about what the job will involve. One of the bigger challenges of an IT job interview, however, is that you may also meet someone from the business side. This person knows only vaguely what network monitoring tools are and is probably a bit intimidated by the idea of bandwidth or network latency. In other words, they probably feel like the imposter, interviewing someone for a seat in ops they don’t fully understand.

But one thing you definitely don’t want to do is remind the interviewer of their own uncertainties. Talk confidently about the work, without going so deep into the technical weeds that the interviewer isn’t sure what you’re saying. Although this shorthand may demonstrate fluency in a multi-vendor environment, it can also suggest you can’t communicate well with the other departments.

You’re a Social Animal

For better or worse, a job interview is a social interaction. Some sysadmins and IT pros would gladly trade the spotlight for wrestling with a wonky script or normalizing office bandwidth.

Nonetheless, this can produce a disconnect. As one IT candidate reported by Dice.com said when asked to describe the ideal work environment, “I just want a job where I can go in a room, do my work and be left alone.”

That candidate probably speaks for many admins, developers and other overworked helpdesk pros, but he didn’t get the job. Business people (including those who work for nonprofits and government) tend to celebrate charisma, and for good reason: The job is all about meeting client needs, which means talking to the customer to understand what they really want.

The good news? Your competition is other techies, probably just as geeky at heart.

The bottom line is that if you’re comfortable about your qualifications for the job — even if it is pushing your limits — that confidence will show through, and help you navigate the rocky spots. And who knows, you may be just who they’re looking for.


In this blog, part of our series on IT best practices, I’ll share how network mapping works and how it will give you a complete vantage point of your entire network.

Modern networks are full of connected devices, interdependent systems, virtual assets and mobile components. Monitoring each of these systems calls for technology that can discover and map everything on your network. Understanding and enacting the best practices of network mapping lays the foundation for successful network monitoring.

An Overview of Network Mapping

Most forms of network management software require what’s known as “seed scope,” which is a range of addresses defining the network – a network map. Network mapping begins by discovering devices using a number of protocols such as SNMP, SSH, Ping, Telnet and ARP to determine everything connected to the network.

Adequately mapping a large network requires being able to make use of both Layer 2 and Layer 3 protocols. Together, they combine to create a comprehensive view of your network.
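To make the “seed scope” idea concrete, here is a minimal Python sketch that expands a CIDR block into the host addresses a discovery scan would then probe with ping, SNMP and ARP queries. The CIDR value is an illustrative assumption, not a recommendation:

```python
# Sketch of building a "seed scope" -- the address range that bounds
# discovery. A real mapping tool would walk this list with ICMP/SNMP/ARP
# probes; here we only enumerate the candidate addresses.
import ipaddress

def seed_scope(cidr):
    """Expand a CIDR block into the host addresses a discovery scan probes."""
    return [str(host) for host in ipaddress.ip_network(cidr).hosts()]

scope = seed_scope("10.0.0.0/29")
print(scope)  # six usable hosts: 10.0.0.1 .. 10.0.0.6
```

Note that `.hosts()` excludes the network and broadcast addresses, which is exactly the set of endpoints worth probing.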

The Two Types of Network Maps

Network discovery protocols break down into two categories, or layers:

  1. Layer 2: Defined as the “data link layer,” these protocols discover port-to-port connections and linking properties. Layer 2 protocols are largely proprietary, which is why the vendor-neutral Link Layer Discovery Protocol (LLDP) must be enabled on every network device.
  2. Layer 3: Defined as the “network layer,” these protocols explore entire neighborhoods of devices by using SNMP-based technology to discover which devices interact with other devices.

Surprisingly, most IT infrastructure monitoring solutions rely solely on Layer 3 protocols. While this produces a broad overview of the network, successful network mapping calls for using Layer 2 protocols as well. Layer 2 protocols provide the important information about port-to-port connectivity and connected devices that allows for faster troubleshooting when problems arise.
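For a feel of what Layer 2 data looks like, consider the ARP cache, which maps IP addresses to MAC addresses on the local segment. The sketch below parses the Linux `/proc/net/arp` text format; the sample table is invented for illustration:

```python
# Hedged sketch: parse a /proc/net/arp-style table into {ip: mac}.
# The sample text below is made up; on a real Linux host you would read
# the file itself, and entries vary with recent traffic.

SAMPLE_ARP = """\
IP address       HW type     Flags       HW address            Mask     Device
10.0.0.1         0x1         0x2         52:54:00:12:34:56     *        eth0
10.0.0.7         0x1         0x2         52:54:00:ab:cd:ef     *        eth0
"""

def parse_arp_table(text):
    """Return {ip: mac}, skipping the header row."""
    table = {}
    for line in text.splitlines()[1:]:
        fields = line.split()
        if len(fields) >= 4:
            table[fields[0]] = fields[3]
    return table

neighbors = parse_arp_table(SAMPLE_ARP)
print(neighbors)
```

A mapping tool aggregates caches like this from many devices to reconstruct which port talks to which neighbor.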

Conveniently enough, Ipswitch WhatsUp Gold uses Layer 2 discovery with ARP cache and the Ping Sweep method, combined with Layer 3 SNMP-enabled discovery methods to provide all the information needed to quickly identify and address problems.

Creating Network Diagrams

Network diagrams make use of the data generated by Layer 2 and Layer 3 protocols, and are super helpful for visualizing the entire network. One important best practice for network mapping is using network diagrams to ensure that the existing networks and IT processes are fully documented – and updated when new processes are added.

Microsoft Visio is the leading network diagramming software on the market. When data is imported, Visio allows for the creation of robust, customizable diagrams and easy sharing between teams and companies. Yet network managers who rely on Visio quickly discover that the lack of an auto-discovery feature severely limits its use.

Ipswitch WhatsConnected was created to solve this problem by auto-generating topology diagrams, which can be useful on their own or exported to Visio, Excel and other formats with a single click. WhatsConnected makes use of Layer 2 and Layer 3 protocols to provide Visio with everything it needs to generate the powerful diagrams it’s known for.

Instituting solutions that follow these suggestions should provide the foundation needed for real-time network monitoring. Coming up next in our best IT practices series, we’ll review network monitoring. Learning how to make the most of network discovery and network mapping will give your organization cutting-edge network monitoring capabilities.

Related articles:

Best Practices Series: Network Discovery

Best Practices Series: IT Asset Management

Join the Ipswitch Community today!

When your network goes down or your computer isn’t operating as it should, sometimes the best thing to do is reboot. It’s often the first solution when troubleshooting problems. We took this notion and applied it to our Ipswitch Community. This month, we relaunched and combined our Ipswitch communities into one.

As IT pros know, an online community is a powerful tool, allowing folks to connect, learn and share thoughts, problems and ideas. With this in mind, we wanted to create a community where our customers and other IT pros can come together to give feedback about our products and services, ask questions, relate their own findings and build a network of other users.

Uniting Product Resources on the Ipswitch Community

The Ipswitch Community has different spaces for different products, such as WhatsUp Gold and File Transfer, but unites all these resources in one place. The Community also is connected with the knowledge base, for self-help, and links to additional support resources. So no matter how a customer wants to solve an issue, the full arsenal of tools is available.

The new Community experience has been simplified so it’s much easier to use and to get where you need to go, making it easier for existing members to interact and more attractive to new community users.

How to Get Involved and Join the Conversation


Come visit today and get involved. Community moderators have even provided tips on how to ask effective questions to get the most out of the community. My “Getting started with the community” post gives you useful links and tips: how to set up an account, update your profile and read the Community charter, plus how to craft better questions and ideas. Detailed descriptions, brief language and images help you get to your point quicker and attract more attention.

I think our Community charter sets a few reasonable guidelines. Requiring visitors to use real names and photos ensures they are interacting as people on the site. Constructive criticism is encouraged as it can establish a productive dialogue. And we do hope that all of our community members play nicely with others.

Beyond the basic facilities of forums, question asking and connection, active community members can get involved in feedback groups and beta testing, and talk with our product and UX teams. Community member involvement is a great way to hear from our customers and others while we strive to create great products and services.

Our Community is here for folks to learn together and provide an outlet for questions, concerns and insight. Join today to find out how you can get closer to other users, my colleagues and our products.



In my last post on the Ipswitch blog, I described how the Internet of Things (IoT) will change the nature of the IT team’s role and responsibilities. The primary purpose of initiating an IoT strategy is to capture data from a broader population of product endpoints. As a result, IoT deployments are also creating a new set of application performance management (APM) and infrastructure monitoring requirements.

New APM and Infrastructure Monitoring Requirements for IoT

Historically, traditional APM and infrastructure monitoring solutions were designed to track the behavior of a relatively static population of business applications and systems supporting universally recognized business processes.

Even this standard assortment of applications, servers and networks could be difficult to properly administer without the right kind of management tools. But over time, most IT organizations have gained a pretty good sense of how to handle these tasks and determine whether their applications and systems are behaving properly.

Now, the APM and infrastructure monitoring function is becoming more complicated in the rapidly expanding world of IoT.

In a typical IoT scenario, an IT organization could be asked to monitor the performance of the software that captures data from a variety of “wearables.” These software-enabled devices might be embedded in various fitness, fashion or health-related products, and each poses different demands to ensure reliable application performance.

In another situation, sensors might be deployed on a fleet of vehicles, and the data retrieved could be used to alert the service desk that a truck is in distress, is due for a tune-up or simply needs to change its route to reach its destination more cost-effectively.

The Key to Successful IoT Deployments

Regardless of the specific use case, the key to making an IoT deployment successful is properly monitoring the performance of the software that captures the sensor data, along with the systems that interpret the meaning of that data and dictate the appropriate response via an application-initiated command.

Therefore, an IoT deployment typically entails monitoring a wide array of inter-related applications that could impact a series of business processes.

For example, an alert regarding a truck experiencing a problem could trigger a request for replacement parts from an inventory management system. This can lead to the dispatch of a service truck guided by a logistics software system. It could also be recorded in a CRM, ERP or other enterprise app to ensure sales, finance and other departments are aware of the customer status. Ultimately, the information could be used to redesign the product and services to make them more reliable, improve customer satisfaction and increase corporate efficiency.
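That fan-out can be sketched as a simple dispatch loop: one alert, several downstream handlers. The system names and handler functions below are hypothetical placeholders, not any real product’s API:

```python
# Illustrative sketch of one sensor alert fanning out to several
# downstream systems (inventory, logistics, CRM). All names are invented.

def handle_vehicle_alert(alert, handlers):
    """Dispatch one sensor alert to every registered downstream handler."""
    return [handler(alert) for handler in handlers]

def order_parts(alert):
    return f"inventory: parts ordered for {alert['vehicle_id']}"

def dispatch_service(alert):
    return f"logistics: service truck routed to {alert['location']}"

def log_to_crm(alert):
    return f"crm: case opened for {alert['vehicle_id']}"

alert = {"vehicle_id": "TRK-042", "location": "I-90 mile 211",
         "code": "ENGINE_TEMP"}
for action in handle_vehicle_alert(alert,
                                   [order_parts, dispatch_service, log_to_crm]):
    print(action)
```

The monitoring challenge is that each of these hops is a separate application whose health must be watched end to end.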

Monitoring these applications and the servers that support them to ensure they are operating at an optimal level across the IoT supply-chain is the new APM reality.

The IoT infrastructure is a lot more complicated than traditional application and server environments of the past. Given that, unified infrastructure monitoring solutions that provide end-to-end views of application delivery can provide significant management leverage.

Related article: The Internet of Things: A Real-World View

IT team pressure

IT teams work valiantly behind the scenes every day to make sure their digital businesses stay connected. With challenges like dealing with cyber threats and new technology, or even just the sheer volume of day-to-day work, it is getting harder and harder for IT teams to keep necessary innovation from going off the rails. These threats to innovation are most glaring in small to mid-sized IT departments where personnel and budget resources tend to be more limited, and team members need to be both generalists and specialists. These are the true front lines of IT – where decisions need to be made quickly and business operations depend on systems functioning properly.

A recent survey by Ipswitch polling 2,685 IT professionals around the world indicated that the top challenges holding IT teams back in 2016 fell into eight distinct categories, with network and application performance monitoring (19 per cent), new technology updates and deployments (14 per cent) and time, budget and resource constraints (10 per cent) among the top responses.

Improving network performance

Ensuring network performance is no easy feat. IT teams are tasked with keeping an organisation’s networks running efficiently and effectively around the clock and need to be concerned with all aspects of network infrastructure, including apps, servers and network connected devices.

Application performance is an important aspect because every company relies on applications running over the network, and an interruption in performance means a stop to business. Workforce fluidity further complicates network performance, as does the proliferation of devices logging on, whether the activity is sanctioned (work laptops, phones etc.) or surreptitious (many forms of wearable tech).

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes; all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the Internet of Things (IoT) age.

The good news is that businesses often already have the monitoring tools they need to spot tell-tale signs of the network beginning to falter; they just aren’t using them to their full ability. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate root causes of complex issues quickly.

Efficient use of network monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to remotely login at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.
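Spotting such recurring spikes amounts to averaging samples across periods and finding the peak. A small illustrative sketch, with invented hourly bandwidth figures standing in for real monitoring data:

```python
# Sketch: find the recurring daily bandwidth spike from periodic samples,
# so capacity can be planned around it. The sample data (Mbps at six
# times of day over three days) is made up for illustration.

def peak_hour(samples_by_day):
    """Average each sample slot across days; return (slot, avg_mbps) of the peak."""
    slots = len(samples_by_day[0])
    averages = [sum(day[s] for day in samples_by_day) / len(samples_by_day)
                for s in range(slots)]
    top = max(range(slots), key=lambda s: averages[s])
    return top, averages[top]

# Three days of samples: midnight, 4am, 8am, noon, 4pm, 8pm
days = [
    [10, 8, 95, 60, 55, 20],
    [12, 9, 90, 58, 50, 22],
    [11, 7, 97, 63, 57, 18],
]
slot, mbps = peak_hour(days)
print(f"peak at sample {slot}: {mbps:.1f} Mbps")  # peak at sample 2: 94.0 Mbps
```

In this toy data, the 8am slot is the reliable peak, matching the remote-login spike scenario above; a real monitor would do the same averaging over weeks or months of samples.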

Evolving performance monitoring 

Infrastructure monitoring systems have evolved greatly over time, offering more automation and more ways to alert network administrators and IT managers to problems with the network. IT environments have become much more complex, resulting in a growing demand for comprehensive network, infrastructure and application monitoring tools. IT is constantly changing and evolving with organisations embracing cost-effective and consolidated IT management tools.

With that in mind, Ipswitch unveiled WhatsUp Gold 16.4, the newest version of its industry-leading unified infrastructure and application monitoring software. The new capabilities within WhatsUp Gold 16.4 help IT teams find and fix problems before the end users are affected, and are a direct result of integrating user feedback in order to provide a greater user experience. Efficient and effective network monitoring delivers greater visibility into network and application performance, quickly identifying issues to reduce troubleshooting time.

One thing is certain when it comes to network monitoring. The cost of implementing such a technology far outweighs the cost of not, especially once you start to add up the cost of any downtime, troubleshooting, performance and availability issues.

Related articles:

8 Issues Derailing IT Team Innovation in 2016

internet of things

CES, the first big technology event of 2016, wrapped in Vegas last week and, as expected, the Internet of Things (IoT) was a hot topic. If last year’s show was the one where everyone heard about the potential impact of disruptive technology, this year we certainly saw the breadth and depth of the IoT. From the EHang personal minicopter to more fitness tracking devices than you could, erm well, shake a leg at, CES 2016 was abuzz with news of how technology is shrinking, rolling, flying and even becoming invisible.

With everything from ceiling fans to smart feeding bowls for pets now connecting to the expanding Internet of Things, it’s time to ask how network and IT pros can cope with the escalating pressure on bandwidth and capacity.

Whether we like it or not, the world is becoming increasingly connected. As the online revolution infiltrates every aspect of our daily lives, the Internet of Things (IoT) has gone from an industry buzzword to a very real phenomenon affecting every one of us. This is reflected in predictions by Gartner, which estimates 25 billion connected ‘things’ will be in use globally by 2020. The rapid growth of the IoT was one of the key topics at this year’s CES. SAIC’s Doug Wagoner focused his keynote speech on how the combination of government and citizen use of the IoT could double Gartner’s predicted figure and hit 50 billion internet-connected devices within the next five years.

It’s easy to see why. Just as sales of original IoT catalysts such as smartphones and tablets appear to be plateauing, emerging tech categories including wearables, smart meters and eWallets are all picking up the baton. The highly anticipated Apple Watch sold millions of units in the months following its release. Health-tech wristbands, such as Fitbit, have also been very successful, with 2015 shipments estimated at 36 million, double that of the previous year. Fitbit announced its latest product, the Blaze smartwatch, at the show and is marketing it as a release that will ‘ignite the world of health and fitness in 2016’. These devices are becoming increasingly popular, and partnerships with fashion brands to produce stylish wearables and jewellery look set to see that popularity continue to grow.

It doesn’t end there either. Industry 4.0 and the rise of the ultra-efficient ‘Smart Factory’ look set to change the face of manufacturing forever, using connected technology to cut waste, downtime and defects to almost zero. Meanwhile, growing corporate experimentation with drones and smart vehicles serves as a good indicator of what the future of business will look like for us all.

But away from all the excitement, there is a growing concern amongst IT teams about how existing corporate networks are expected to cope with the enormous amount of extra strain they will come under from these new connected devices. With many having only just found a way to cope with trends such as Bring Your Own Device (BYOD), will the IoT’s impact on business networks be the straw that finally breaks the proverbial camel’s back?

The answer is no, or at least it doesn’t have to be. With this in mind, I wanted to look at a couple of key areas most likely to be giving the IT teams that look after company networks sleepless nights, and how those areas can be addressed. Handled effectively, not only can the current IoT storm be weathered, but businesses can begin building towards a brighter, more flexible future across their entire network.

1) Review infrastructure to get it ready for the Internet of Things

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications across an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes; all they care about is getting good service the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and to manage network bandwidth efficiently is essential to keeping any network up and running in the IoT age.

The good news is that most businesses already have the monitoring tools they need to spot the tell-tale signs of a network beginning to falter; they just aren’t using them to their full potential. Used well, these tools provide a central, unified view across every aspect of networks, servers and applications, giving the IT team not only a high level of visibility but also the ability to isolate the root causes of complex issues quickly.

Efficient use of network or infrastructure monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to log in remotely at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing it to plan ahead and allocate bandwidth accordingly.
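As a rough illustration of this kind of trend analysis, the Python sketch below flags the hours of the day whose average utilisation sits well above the daily norm. The sample data, threshold and function name are all hypothetical; in practice the samples would come from whatever your monitoring tool exports.

```python
from statistics import mean, stdev

def find_peak_hours(samples, threshold_sds=2.0):
    """Flag hours whose average utilisation is well above the daily norm.

    samples: list of (hour, mbps) readings collected over many days.
    Returns hours whose mean utilisation exceeds the overall hourly mean
    by more than `threshold_sds` standard deviations.
    """
    by_hour = {}
    for hour, mbps in samples:
        by_hour.setdefault(hour, []).append(mbps)
    hourly_means = {h: mean(v) for h, v in by_hour.items()}
    overall = list(hourly_means.values())
    cutoff = mean(overall) + threshold_sds * stdev(overall)
    return sorted(h for h, m in hourly_means.items() if m > cutoff)

# Illustrative data: a morning remote-login spike at 09:00 stands out
# against a flat 100 Mbps baseline across the rest of the day.
samples = [(h, 100) for h in range(24) for _ in range(5)]
samples += [(9, 900) for _ in range(5)]
print(find_peak_hours(samples))  # [9]
```

Once the recurring peak hours are known, bandwidth can be reserved or traffic shaped ahead of them rather than after users start complaining.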

2) Benchmark for wireless access and network impact

The vast majority of IoT devices connecting to the business network will do so wirelessly. With wireless access always at a premium across any network, it is critical to understand how a large number of additional devices connecting this way will affect overall network performance. By developing a benchmark of which objects and devices are currently connecting, where from, and what they are accessing, businesses can get a much better picture of how the IoT will impact their network bandwidth over time.

Key questions to ask when establishing network benchmarks are:

  • What are the most common objects and devices connecting? Are they primarily for business or personal use?
  • What are the top consumers of wireless bandwidth in terms of objects, devices and applications?
  • How are connected objects or devices moving through the corporate wireless network, and how does this impact access point availability and performance, even security?

By benchmarking effectively, businesses can identify any design changes needed to accommodate growing bandwidth demand and implement them early, before issues arise.
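To make those benchmark questions concrete, here is a minimal Python sketch that ranks the top wireless bandwidth consumers and records which access points each device has roamed across. The record format, device names and function name are illustrative assumptions; real data would come from your access-point or monitoring-tool exports.

```python
from collections import defaultdict

def top_bandwidth_consumers(records, n=3):
    """Rank devices by total bytes transferred over the wireless network.

    records: iterable of (device_id, access_point, bytes_transferred).
    Returns up to `n` (device, total_bytes, access_points) tuples,
    heaviest consumer first.
    """
    totals = defaultdict(int)
    by_ap = defaultdict(set)
    for device, ap, nbytes in records:
        totals[device] += nbytes
        by_ap[device].add(ap)  # tracks how widely each device roams
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return [(dev, nbytes, sorted(by_ap[dev])) for dev, nbytes in ranked[:n]]

# Illustrative export: one laptop dominates, one phone roams between floors.
records = [
    ("laptop-042", "ap-floor1", 8_000_000),
    ("phone-117",  "ap-floor1", 1_500_000),
    ("phone-117",  "ap-floor2", 2_500_000),
    ("sensor-009", "ap-floor2",   200_000),
]
for dev, nbytes, aps in top_bandwidth_consumers(records):
    print(dev, nbytes, aps)
```

Run periodically, a report like this answers the first two benchmark questions directly and hints at the third via the per-device access-point lists.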

3) Review policies – Security and compliance

In addition to the bandwidth and wireless access issues discussed above, the proliferation of the IoT brings with it a potentially more troublesome issue for some: security and compliance. In heavily regulated industries such as finance, legal and healthcare, data privacy is of the utmost importance, with punishments to match. And it is an ever-changing landscape: new EU data privacy laws that will affect any business that collects, processes, stores or shares personal data have recently been announced.

Indeed, businesses can face ruinous fines if found in breach of the rules relating to data protection. However, it can be extremely difficult to ensure compliance if there are any question marks over who or what has access to the network at any given point in time. Unfortunately, this is where I have to tell you there is no one-size-fits-all solution to the problem. As more and more Internet enabled devices begin to find their way onto the corporate network, businesses must sit down and formulate their own bespoke plans and policies for addressing the problem, based on their own specific business challenges. But taking the time to do this now, rather than later, will undoubtedly pay dividends in the not-too-distant future. When it comes to security and compliance, no business wants to be playing catch up.

The Internet of Things is undoubtedly an exciting phenomenon which marks yet another key landmark in the digitisation of the world as we know it. However, it also presents unique challenges to businesses and the networks they rely on. Addressing just a few of the key areas outlined above should help IT and network teams avoid potential disruption to their business (or worse) as a result of the IoT.


The International Organization for Standardization (ISO) is a non-governmental entity made up of 162 national standards bodies. By creating sets of standards across different markets and industries, it promotes quality, operational efficiency and customer satisfaction.

Businesses seek ISO certification to signal their commitment to excellence. As a midsized IT service team implementing ISO standards, you can reshape quality management, operations and even company culture.

Choosing the Right Certification

The first step is to decide which sets of standards apply to your area of specialization. Most sysadmins focus on three sets of standards: 20000, 22301 and 27001.

  • ISO 20000 helps organizations develop service-management standards. It standardizes how the helpdesk provides technical support to customers as well as how it assesses its service delivery.
  • ISO 22301 consists of business continuity standards designed to address how you’d handle significant external disruptions, like natural disasters or acts of terrorism. These standards are especially relevant for hospital databases, emergency services, transportation and financial institutions — anywhere big service interruptions could spell a catastrophe.
  • ISO 27001 standardizes infosec management within the organization both to reduce the likelihood of costly data breaches and to protect customers and intellectual property. In support of ISO 27001, ISO 27005 offers concrete guidelines for security risk management.

Decisions, Decisions

Deciding which ISO compliance challenge to tackle first depends on a few different things. If your helpdesk is already working within a framework like ITIL — with a customer-oriented, documented menu of services — ISO 20000 certification will be an easy win that can motivate the team to then tackle a bigger challenge, like security. If you’re particularly concerned about security and want to start there, try combining ISO 22301 and ISO 27001 under a risk-management umbrella. Set up a single risk assessment/risk treatment framework to address both standards at once.

Getting Started

ISO compliance is not about checking off boxes indicating you’ve reached a minimum standard. It’s about developing effective processes to improve performance. With ISO 22301 and 27001, you’ll document existing risks, evaluate them and decide whether to accept or reduce them. With ISO 20000, you’ll document current service offerings and helpdesk procedures like ticket management and identify ways to reduce time to resolution.
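One way to picture that accept-or-reduce step is a tiny risk register, sketched below in Python. The 1–5 likelihood and impact scales, the threshold and the sample risks are illustrative assumptions for this article, not anything prescribed by the ISO standards themselves.

```python
def triage_risks(risks, accept_below=6):
    """Classify documented risks as 'accept' or 'reduce'.

    Each risk is (name, likelihood 1-5, impact 1-5); score = likelihood * impact.
    Risks scoring below `accept_below` are accepted and documented; the rest
    are queued for treatment, highest score first.
    """
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    accept = [r for r in scored if r[1] < accept_below]
    reduce_ = sorted((r for r in scored if r[1] >= accept_below),
                     key=lambda r: r[1], reverse=True)
    return accept, reduce_

# Illustrative register entries:
risks = [
    ("Flickering monitor in lobby", 2, 1),  # low score: accept and document
    ("Unpatched VPN gateway",       4, 5),  # high score: treat first
    ("Stale helpdesk runbook",      3, 2),
]
accept, reduce_ = triage_risks(risks)
```

Even a toy register like this forces the two habits the standards care about: every risk is written down, and every accept/reduce decision has a recorded rationale.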


ISO compliance looks a little different to every organization, and IT finds its own balance between risk prevention and acceptance. For instance, if a given risk is low and fixing it would be inexpensive, accept the risk, document it and don’t throw money at preventing it. Whichever standard you start with, though, keep a few principles in mind:

  • Focus on your most critical business processes. Identify what your organization can least afford to lose — financial transactions processing, for example. On subsequent assessments, you can dig deeper into less crucial operations.
  • Identify which vulnerabilities endanger those processes. Without an effective ticketing hierarchy at the helpdesk, a sysadmin could wind up troubleshooting an employee’s flickering monitor while an entire building loses network connectivity.
  • Avoid assessing every process or asset at first. Instead of looking at all in-house IP addresses for ISO 27001, focus on the equipment supporting your most important functions. Again, you can dig deeper after standardizing the way you manage information.
  • Don’t chase irrelevant items. Lars Neupart, founder and CEO of Neupart Information Security Management, finds that ISO 27005 threat catalogs look like someone copied them from a whiteboard without bothering to organize them. Don’t assume every listed item applies to every situation. As Neupart puts it: “Not everything burns.”
  • Put findings in terms that management can understand. When you’re asking management to pay for implementing new helpdesk services or security solutions, keep your business assessments non-technical. Put information in numerical terms, such as estimating the hourly cost of downtime or the percent of decline in quarterly revenue after a data breach.

So, How Much Is This Going to Cost?

Bonnie del Conte is president of CONNSTEP, Inc., a Connecticut-based company that assists companies in implementing ISO across a range of industries. She says the biggest expenses related to ISO certification are payroll, the creation of assessment documentation and systems (e.g., documentation for periodic assessments, including both paper and software) and new-employee training programs. Firms like hers stipulate consulting fees in addition to the actual certification audit. At the same time, hiring a consultant can reduce the time intervals for standards implementation and audit completion — and prevent mistakes.

Why It’s Worth It

The ultimate goal of ISO certification is to generate measurable value and improvement within IT. It’s about how proactive, progressive awareness and problem-solving prevents disasters, improves service and makes operations more efficient. Its greatest intangible benefit, says del Conte, is often a better relationship between IT and management. “Companies find improved communication from management,” del Conte says, “due to more transparency about expectations and the role that everyone has in satisfying customer expectations.”

Don’t try to become the perfect IT service team or address every security vulnerability the first time around. Hit the most important points and then progressively look deeper with every assessment cycle. As your operations improve, so will IT’s culture and its relationship with the business side. If ISO certification helps you prove that IT is way more than a cost center, it’s worth the investment.

Technology infrastructure has an expiration date. The problem? It’s not stamped on the side of the carton. Or available online. The life cycle of any server, networking device or associated hardware is determined by a combination of local and market factors: What’s the competition doing? How quickly is your business growing? Will C-suite executives approve any new spend?

Although there is no hard-and-fast rule for determining your due date, general guidelines exist. Here are some key strategies for your next infrastructure upgrade.

Decisions, Decisions

As noted by Forbes, companies have three basic choices when considering an improvement of their servers and networks: Upgrade specific components, spend for all-new hardware or consider moving a portion of their infrastructure to the cloud. But this is actually step two in the upgrade process. Step one is determining if your existing technology can hang on a little longer, or if a change needs to happen now.

How Did He Do That?

In some cases, your company can avoid spending money by deploying a few MacGyver-style tactics to keep infrastructure up and running, even when upgrades are warranted. Nevertheless, the IT team of Arthur Baxter, Network Operation Analyst at virtual private network service ExpressVPN, tends to avoid these kinds of duct-tape-and-matchstick fixes because, according to Baxter, “they’re not very comprehensible to the next person that has to come along and totally replace what you’ve only barely taped together.” Better-than-average devs and admins all have their own set of tricks to keep infrastructure humming, but those tricks are typically called “best practices” and aren’t designed to push existing infrastructure past its limits. In other words, while sticking servers together with charisma and clever workarounds can extend hardware life, the results are unpredictable.

The Time Has Come

How do you know when it’s time for an upgrade? Company growth is a good indicator, and this could take the form of global expansion or an effort to make best use of big data. According to Baxter, however, advances in the industry may also force your hand: “If there’s something newer and better on the market, it’s [ideal] for an upgrade,” regardless of your infrastructure’s current performance. Budget limitations play a role, since it’s not always possible to commit the cash necessary for a better server or new network technology. He points out, though, that “top companies stay on the cutting edge of what’s available.” Delaying too long in an effort to extend the lifecycle of existing hardware could put you behind the curve.

Making the Case

Even when it’s time for an infrastructure upgrade, it’s a safe bet that supervisors and executives won’t hand out big-budget increases just because you ask nicely. It’s always a good idea to make your case using measurable improvements — such as increased network performance, storage capacity, agility and system resiliency — but it’s also worth exploring other ways to justify technology spending. “The best way,” argues Baxter, “is to find a consultant or join some vendor sessions.” If you have a large support budget, you can also request a vendor proposal. By getting these experts to advocate for their technology, and then backing up this marketing spin with your own analysis, it is possible to showcase the line-of-business benefits that come with your proposed strategy.

Cost and user experience are also excellent talking points, supported in a Huffington Post piece that discusses the need for upgrades to American election infrastructure. Not only can better technology save money — between $0.50 and $2.34 for every voter registered online — but the convenience of online and electronic voting platforms can increase voter turnout. So, for your upgrade proposal, consider showcasing how improved resiliency can reduce potential costs in the event of a data breach, or how greater agility can improve the end-user experience with better access to critical network functions.

Do you need an infrastructure upgrade? If you’re asking, your due date has arrived. And while MacGyver-ing your hardware into another business quarter is one way to prolong its life, you’re better off pitching supervisors and C-suite executives for the upgrade your competition may have already implemented.

Knowing which BYOD risks your fellow IT pros face is paramount in determining how to mitigate them. And the scope of BYOD’s influence on company data hasn’t stopped changing since your office first implemented a BYOD policy. What kinds of devices are users likely to bring to work with them? The range encompasses far more than smartphones and tablets. Once these devices are identified, however, the risks they represent can help your team formulate a policy that keeps resources safe when they’re accessed from outside the network.

Workers Bring More than One Device to Work

Not long ago, information security only had to worry about employees bringing work home on company laptops and logging in remotely. Then smartphones hit the market, followed by tablets and phablets. On any given day you might see smartwatches, fitness trackers and even smart fobs try to access your network for control over a home automation or security system.

As an example of this proliferation, the U.S. Marine Corps recently partnered with three mobile carriers to provide a total of 21 iOS and Android smartphones to see if secure access to the Corps’ intranet could be delivered. Less than 1 percent of Marines use BlackBerry devices; the rest have moved to mostly Android or iOS. This is consistent with a recent Frost & Sullivan report, which suggests approximately 70 percent of U.S. organizations tolerate BYOD activity, a number that is expected to climb by almost 10 percent in the next few years.

BYOD Risks Are Often More Subtle

Mobile devices aren’t usually designed with high security in mind, and cybercrime concerns are often addressed only slowly in OS or application updates. Smartphones, smartwatches and wearables may not have the ability to send and execute files remotely, but they may be able to gain access to company APIs and wreak havoc on your UX. Such subtle interference makes these attacks harder to detect.

One company recently flirted with bankruptcy because it lost a number of lucrative contracts due to overbidding. A malicious programmer, after planting malware in the company’s system, was able to manipulate internal APIs to change costing data, causing the sales team to produce inaccurate prices for their clients.

Watch for Lateral Movement

In a recent report titled “Defending Against the Digital Invasion,” Information Security magazine suggests mobile devices “can easily turn into a beachhead that an attacker can use to compromise your network. Proper onboarding, network segmentation and testing of these devices will be critical, but these processes have to be developed to scale.”

Chances are, malware will have already breached your perimeter security controls by the time it touches a personal device. To defend against this kind of intrusion, your controls need to be able to detect and monitor lateral movement, and they should be applied continuously to identify threats before they cause damage. In the first part of 2015, for instance, there were several thousand reports of malware targeting connected disk-storage devices, including network surveillance camera storage, scanning for exactly these potential beachheads.
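As a loose illustration of what lateral-movement detection looks for, the Python sketch below flags internal hosts that suddenly talk to far more internal peers than their historical baseline. The flow format, host names, baseline counts and tolerance are all hypothetical; real detection belongs in your monitoring or security tooling, not a script like this.

```python
from collections import defaultdict

def flag_lateral_movement(flows, baseline, tolerance=3):
    """Flag internal hosts contacting unusually many internal peers.

    flows: iterable of (src_host, dst_host) internal connections seen today.
    baseline: dict mapping host -> typical distinct-peer count from
    past monitoring. A host is flagged when its current peer count
    exceeds its baseline by more than `tolerance` new peers.
    """
    peers = defaultdict(set)
    for src, dst in flows:
        peers[src].add(dst)
    return sorted(h for h, p in peers.items()
                  if len(p) > baseline.get(h, 0) + tolerance)

# Illustrative data: ws-12 normally talks to ~2 servers but suddenly
# fans out to 8, a classic sign of scanning from a compromised host.
flows = [("ws-12", f"srv-{i:02d}") for i in range(8)] + [("ws-07", "srv-01")]
baseline = {"ws-12": 2, "ws-07": 3}
print(flag_lateral_movement(flows, baseline))  # ['ws-12']
```

The value of the approach is the baseline: a personal device that has just become a beachhead behaves differently from the host it pretends to be.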

Mobile Devices Can Make DDoS Attacks Easier

Mobile device APIs often lack sufficient rate limits, and they’re quite easy to exploit for DDoS attacks. Because the requests generated in this type of attack originate from within the network, they are harder to detect and can quickly overwhelm and compromise a backend database. Future DDoS attackers may use mobile devices to target specific application-layer resource bottlenecks. Already inside the network, they can then send fewer requests that are significantly more difficult to filter out than externally originated DDoS traffic, because they “fit in” with normal queries.
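One common mitigation for those missing rate limits is a per-client token bucket. The Python sketch below is a minimal, illustrative version; the rate, capacity and the idea of keeping one bucket per device or API key are assumptions for the example, not a prescription.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: a client may burst up to
    `capacity` requests, then proceed at `rate` requests per second."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens in proportion to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per device or API key; a rapid burst of 15 requests
# exhausts the 10-token capacity and the remainder are throttled.
bucket = TokenBucket(rate=5, capacity=10)
burst = [bucket.allow() for _ in range(15)]
print(burst.count(True))
```

Because the bucket refills continuously, legitimate clients are never blocked outright; only sustained floods above the configured rate get dropped.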

The Top 10 Hidden Network Costs of BYOD

As wireless becomes your primary user network, you need to deliver the availability and performance your users expect from the wired network. BYOD complicates this by increasing network density, bandwidth consumption and security risks. Download this Ipswitch white paper and learn the top 10 hidden network costs of BYOD.
