Ipswitch Blog

Where IT Pros Go to Grow


CES, the first big technology event of 2016, wrapped in Vegas last week and, as expected, the Internet of Things (IoT) was a hot topic. If last year’s show was the one where everyone heard about the potential impact of disruptive technology, this year was certainly the year we saw the breadth and depth of the IoT. From the EHang personal minicopter to more fitness tracking devices than you could, erm well, shake a leg at, CES 2016 was abuzz with news of how technology is shrinking, rolling, flying and even becoming invisible.

With everything from ceiling fans to smart feeding bowls for pets now connecting to the expanding Internet of Things, it’s time to ask how network and IT pros can cope with the escalating pressure on bandwidth and capacity.

Whether we like it or not, the world is becoming increasingly connected. As the online revolution infiltrates every aspect of our daily lives, the Internet of Things (IoT) has gone from an industry buzzword to a very real phenomenon affecting every one of us. This is reflected in predictions by Gartner, which estimates 25 billion connected ‘things’ will be in use globally by 2020. The rapid growth of the IoT is one of the key topics at this year’s CES. SAIC’s Doug Wagoner focused his keynote speech on how the combination of government and citizen use of the IoT could double Gartner’s predicted figure and hit 50 billion internet-connected devices within the next five years.

It’s easy to see why. Just as sales of original IoT catalysts such as smartphones and tablets appear to be plateauing, emerging new tech categories including wearables, smart meters and eWallets are all picking up the baton. The highly anticipated Apple Watch sold 47.5 million units in the three months following its release. Health-tech wristbands, such as Fitbit, have also been very successful, with shipments estimated to reach 36 million in 2015, double that of the previous year. Fitbit announced its latest product, the Fitbit Blaze smartwatch, at the show and is marketing it as a release that will ‘ignite the world of health and fitness in 2016’. These devices are becoming increasingly popular, and partnerships with fashion brands to produce fashionable wearables and jewellery are set to see that popularity continue to grow.

It doesn’t end there either. Industry 4.0 and the rise of the ultra efficient ‘Smart Factory’ looks set to change the face of manufacturing forever, using connected technology to cut waste, downtime and defects to almost zero. Meanwhile, growing corporate experimentation with drones and smart vehicles serves as a good indicator of what the future of business will look like for us all.

But away from all the excitement, there is a growing concern amongst IT teams about how existing corporate networks are expected to cope with the enormous amount of extra strain they will come under from these new connected devices. With many having only just found a way to cope with trends such as Bring Your Own Device (BYOD), will the IoT’s impact on business networks be the straw that finally breaks the proverbial camel’s back?

The answer is no, or at least it doesn’t have to be. With this in mind, I wanted to look at a couple of the key areas most likely to be giving the IT teams who look after company networks sleepless nights, and how those areas can be addressed. Done effectively, this not only lets businesses weather the current IoT storm, but also lets them begin building towards a brighter, more flexible future across their entire network.

1) Review infrastructure to get it ready for The Internet of Things

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes, all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the IoT age.

The good news is that most businesses already have the monitoring tools they need to spot the tell-tale signs of a network beginning to falter; they just aren’t using them to their full potential. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate the root causes of complex issues quickly.

Efficient use of network or infrastructure monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to remotely login at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.
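To make the idea of trend-spotting concrete, here is a minimal sketch (in Python, with hypothetical names and a made-up threshold) of how recurring daily spikes could be picked out of per-hour bandwidth samples exported from a monitoring tool:

```python
from collections import defaultdict

def recurring_spike_hours(samples, threshold_mbps=800.0):
    """Given (hour_of_day, mbps) samples collected over many days,
    return the hours whose average utilization exceeds the threshold,
    i.e. candidates for recurring daily spikes such as the morning
    remote-login rush."""
    by_hour = defaultdict(list)
    for hour, mbps in samples:
        by_hour[hour].append(mbps)
    return sorted(h for h, vals in by_hour.items()
                  if sum(vals) / len(vals) > threshold_mbps)

# Example: three days of samples with a consistent 9 a.m. spike
samples = [(9, 950), (9, 900), (9, 980), (14, 300), (14, 320), (2, 50)]
print(recurring_spike_hours(samples))  # [9]
```

The same aggregation works for weekly, monthly or annual trends; only the bucketing key changes.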

2) Benchmark for wireless access and network impact

The vast majority of IoT devices connecting to the business network will be doing so wirelessly. With wireless access always at a premium across any network, it is critical to understand how a large number of additional devices connecting this way will affect overall network performance. By developing a benchmark of which objects and devices are currently connecting, where from, and what they are accessing, businesses can get a much better picture of how the IoT will impact their network bandwidth over time.

Key questions to ask when establishing network benchmarks are:

  • What are the most common objects and devices connecting? Are they primarily for business or personal use?
  • What are the top consumers of wireless bandwidth in terms of objects, devices and applications?
  • How are connected objects or devices moving through the corporate wireless network, and how does this impact access point availability and performance, even security?

By benchmarking effectively, businesses can identify any design changes needed to accommodate growing bandwidth demand and implement them early, before issues arise.
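A first pass at answering the bandwidth question above can be as simple as aggregating flow records per device. This sketch (Python, with invented device names) shows the idea:

```python
from collections import Counter

def top_consumers(flow_records, n=3):
    """Aggregate per-device byte counts from flow records of the form
    (device_id, bytes_transferred) and return the top-n consumers,
    a starting point for a wireless-bandwidth benchmark."""
    totals = Counter()
    for device, nbytes in flow_records:
        totals[device] += nbytes
    return totals.most_common(n)

flows = [("thermostat-12", 10_000), ("laptop-7", 2_500_000),
         ("laptop-7", 1_500_000), ("camera-3", 900_000)]
print(top_consumers(flows, n=2))
# [('laptop-7', 4000000), ('camera-3', 900000)]
```

Run against a few weeks of real flow exports, the resulting ranking answers the "top consumers" question directly and gives the baseline against which new IoT devices can be measured.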

3) Review policies – Security and compliance

In addition to the bandwidth and wireless access issues discussed above, the proliferation of the IoT brings with it a potentially more troublesome issue for some; that of security and compliance. In heavily regulated industries such as financial, legal and healthcare, data privacy is of utmost importance, with punishments to match. And it is an ever-changing landscape. New EU data privacy laws that will affect any business that collects, processes, stores or shares personal data have recently been announced.

Indeed, businesses can face ruinous fines if found in breach of the rules relating to data protection. However, it can be extremely difficult to ensure compliance if there are any question marks over who or what has access to the network at any given point in time. Unfortunately, this is where I have to tell you there is no one-size-fits-all solution to the problem. As more and more Internet enabled devices begin to find their way onto the corporate network, businesses must sit down and formulate their own bespoke plans and policies for addressing the problem, based on their own specific business challenges. But taking the time to do this now, rather than later, will undoubtedly pay dividends in the not-too-distant future. When it comes to security and compliance, no business wants to be playing catch up.

The Internet of Things is undoubtedly an exciting phenomenon which marks yet another key landmark in the digitisation of the world as we know it. However, it also presents unique challenges to businesses and the networks they rely on. Addressing just a few of the key areas outlined above should help IT and network teams avoid potential disruption to their business (or worse) as a result of the IoT.



Spotify recommends the next album you should play. Twitter customizes moments and stories you should be reading. The flash sale site Gilt personalizes online shopping down to your favorite brands and discounts. The best experiences on the Web and mobile apps involve clean design, interesting details and intuitive interactions. When companies strive to offer this experience in their apps, they often turn to the LAMP stack for enablement.

The LAMP Stack

The acronym “LAMP” was aptly named for its four original open-source components: Linux, Apache, MySQL and PHP. Over the years the LAMP stack has evolved to include alternatives, while retaining its open-source roots. For example, the “P” can now also mean the Perl or Python programming languages.

The open-source nature of each component in the LAMP stack has three distinct advantages for IT pros:

  • Each tool is free to use, saving money
  • Licenses are permissive rather than restrictive, expanding how each tool can be used
  • You aren’t dependent on a vendor to fix bugs, so you can address any issues yourself

So what does that mean to infrastructure managers? Well, that great experience depends on the underlying infrastructure being up and running. So let’s break that down a little starting at the top of the LAMP stack.


Linux provides the OS layer of the stack. Here we need to monitor for memory bottlenecks, CPU load, storage or network issues that can affect the core performance of the entire stack.
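As a rough illustration of what OS-layer monitoring checks, here is a small sketch that parses /proc/meminfo-style output (the format Linux actually exposes) and derives a memory-pressure figure; the threshold and helper names are invented:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            info[key.strip()] = int(rest.split()[0])
    return info

def memory_pressure(info):
    """Fraction of memory in use; a value that stays near 1.0
    suggests a bottleneck at the OS layer of the stack."""
    return 1 - info["MemAvailable"] / info["MemTotal"]

sample = "MemTotal:       8000000 kB\nMemAvailable:   2000000 kB"
print(round(memory_pressure(parse_meminfo(sample)), 2))  # 0.75
```

On a live host you would read the real file with `open("/proc/meminfo").read()` and poll on an interval; CPU load and disk figures come from /proc/loadavg and /proc/diskstats in the same spirit.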


Apache is one of the most popular web servers and offers a static web structure as the basis for your app. It is good practice to start monitoring at this layer for fundamental issues that can tank apps. You’ll want to watch for things like number of incoming requests, web server load and how much of the CPU is being used.
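Apache exposes these very numbers through its mod_status module; the machine-readable endpoint (/server-status?auto) returns simple `Key: value` lines. A minimal parsing sketch, using sample output rather than a live server:

```python
def parse_server_status(text):
    """Parse the machine-readable output of Apache's mod_status
    endpoint (/server-status?auto) into a dict, keeping numeric
    fields as floats."""
    status = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        if value:
            try:
                status[key.strip()] = float(value)
            except ValueError:
                status[key.strip()] = value.strip()
    return status

sample = "Total Accesses: 12345\nCPULoad: .015\nBusyWorkers: 7\nIdleWorkers: 43"
s = parse_server_status(sample)
print(s["BusyWorkers"], s["CPULoad"])  # 7.0 0.015
```

In practice a monitoring tool fetches this URL on a schedule and alerts when, say, BusyWorkers approaches the configured worker limit.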


Today’s web apps track habits, remember login data and predict user behavior. This streamlines web browsing and communication. But to keep all this running as it should, the stack requires a database. The LAMP stack usually relies on a MySQL database server to store this information, but again, substitutions are often made, including PostgreSQL and even MongoDB.
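At the database layer, MySQL’s `SHOW GLOBAL STATUS` counters are the usual raw material for monitoring. This sketch (counters shown as plain dicts rather than a live connection) derives query throughput from two snapshots:

```python
def queries_per_second(status_then, status_now, interval_s):
    """Estimate query throughput from two snapshots of MySQL's
    SHOW GLOBAL STATUS counters (represented here as dicts).
    The 'Queries' counter is cumulative, so we difference it."""
    return (status_now["Queries"] - status_then["Queries"]) / interval_s

t0 = {"Queries": 10_000, "Threads_connected": 12}
t1 = {"Queries": 13_000, "Threads_connected": 15}
print(queries_per_second(t0, t1, 60))  # 50.0
```

The same snapshot-and-difference pattern works for slow queries, aborted connections and most other cumulative counters.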


PHP is a server-side scripting language designed specifically for the web, but also used as a general-purpose programming language. It enables developers to add dynamic interactions with app and web users. As noted, Perl and Python can also be used as alternatives to PHP.

WhatsUp Gold Visibility Expands to Include Linux and Apache

Over the years, we’ve extended WhatsUp Gold beyond its origins in the network to include the physical and virtual servers, and the core and custom apps, your organization relies on to run the business. In WhatsUp Gold version 16.4, announced last week, that visibility expands to include Linux and Apache, giving you the end-to-end infrastructure perspective you need to quickly identify and remediate developing issues.

The LAMP stack is important to businesses implementing applications that will improve the customer experience and drive profits. The business case for using LAMP relies heavily on lack of restrictions and ease of implementation. This is especially important for small and mid-sized IT teams where the focus has to be on top level improvements as opposed to base level functionality.

This is also a key reason why so many small IT teams depend on Ipswitch and our WhatsUp Gold product. It’s powerful, comprehensive, easy to use and customize to your needs, and leads the industry in low cost of ownership. And it is precisely our respect for these small IT teams that drives us to develop things like support for Linux and Apache, integrated into your unified infrastructure view for free.

Oh, and by the way, we also added support for Java Management Extensions (JMX) so you can monitor Java apps as well, but that’s a story for another day.

Related article:

What’s New in WhatsUp Gold 16.4



The International Organization for Standardization (ISO) is a non-governmental organization made up of 162 national standards bodies. By creating sets of standards across different markets, it promotes quality, operational efficiency and customer satisfaction.

Businesses seek ISO certification to signal their commitment to excellence. As a midsized IT service team implementing ISO standards, you can reshape quality management, operations and even company culture.

Choosing the Right Certification

The first step is to decide which sets of standards apply to your area of specialization. Most sysadmins focus on three sets of standards: 20000, 22301 and 27001.

  • ISO 20000 helps organizations develop service-management standards. It standardizes how the helpdesk provides technical support to customers as well as how it assesses its service delivery.
  • ISO 22301 consists of business continuity standards designed to address how you’d handle significant external disruptions, like natural disasters or acts of terrorism. These standards are especially relevant for hospital databases, emergency services, transportation and financial institutions — anywhere big service interruptions could spell a catastrophe.
  • ISO 27001 standardizes infosec management within the organization both to reduce the likelihood of costly data breaches and to protect customers and intellectual property. In support of ISO 27001, ISO 27005 offers concrete guidelines for security risk management.

Decisions, Decisions

Deciding which ISO compliance challenge to tackle first depends on a few different things. If your helpdesk is already working within a framework like ITIL — with a customer-oriented, documented menu of services — ISO 20000 certification will be an easy win that can motivate the team to then tackle a bigger challenge, like security. If you’re particularly concerned about security and want to start there, try combining ISO 22301 and ISO 27001 under a risk-management umbrella. Set up a single risk assessment/risk treatment framework to address both standards at once.
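One way to picture that combined risk-management umbrella is a single risk register scored for likelihood and impact, with an accept/reduce decision applied uniformly. A sketch, with invented risks, scales and thresholds:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int   # 1 (rare) .. 5 (frequent)
    impact: int       # 1 (minor) .. 5 (severe)
    standard: str     # "ISO 22301", "ISO 27001", etc.

    @property
    def score(self):
        return self.likelihood * self.impact

def treatment_plan(risks, accept_below=6):
    """One risk-treatment view across both standards: low scores are
    accepted and documented, the rest are queued for reduction."""
    return {"accept": [r.name for r in risks if r.score < accept_below],
            "reduce": [r.name for r in risks if r.score >= accept_below]}

register = [Risk("Datacenter flood", 1, 5, "ISO 22301"),
            Risk("Phishing-led breach", 4, 4, "ISO 27001"),
            Risk("Single printer outage", 3, 1, "ISO 22301")]
print(treatment_plan(register))
```

The point is not the arithmetic but the single framework: every risk, continuity or security, passes through the same assessment and lands in the same documented plan.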

Getting Started

ISO compliance is not about checking off boxes indicating you’ve reached a minimum standard. It’s about developing effective processes to improve performance. With ISO 22301 and 27001, you’ll document existing risks, evaluate them and decide whether to accept or reduce them. With ISO 20000, you’ll document current service offerings and helpdesk procedures like ticket management and identify ways to reduce time to resolution.


ISO compliance looks a little different to every organization, and IT finds its own balance between risk prevention and acceptance. For instance, if a given risk is low and fixing it would be inexpensive, accept the risk, document it and don’t throw money at preventing it. Whichever standard you start with, though, keep a few principles in mind:

  • Focus on your most critical business processes. Identify what your organization can least afford to lose — financial transactions processing, for example. On subsequent assessments, you can dig deeper into less crucial operations.
  • Identify which vulnerabilities endanger those processes. Without an effective ticketing hierarchy at the helpdesk, a sysadmin could wind up troubleshooting an employee’s flickering monitor while an entire building loses network connectivity.
  • Avoid assessing every process or asset at first. Instead of looking at all in-house IP addresses for ISO 27001, focus on the equipment supporting your most important functions. Again, you can dig deeper after standardizing the way you manage information.
  • Don’t chase irrelevant items. Lars Neupart, founder and CEO of Neupart Information Security Management, finds that ISO 27005 threat catalogs look like someone copied them from a whiteboard without bothering to organize them. Don’t assume every listed item applies to every situation. As Neupart puts it: “Not everything burns.”
  • Put findings in terms that management can understand. When you’re asking management to pay for implementing new helpdesk services or security solutions, keep your business assessments non-technical. Put information in numerical terms, such as estimating the hourly cost of downtime or the percent of decline in quarterly revenue after a data breach.
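Putting findings in numerical terms can be as simple as a back-of-the-envelope downtime-cost model. This sketch uses made-up figures; the inputs are estimates to be agreed with the business side, not real data:

```python
def hourly_downtime_cost(annual_revenue, revenue_hours_per_year,
                         staff_count, loaded_hourly_rate,
                         productivity_loss=1.0):
    """Rough hourly cost of an outage: lost revenue plus idle staff.
    productivity_loss scales the labor term for partial outages."""
    revenue_loss = annual_revenue / revenue_hours_per_year
    labor_loss = staff_count * loaded_hourly_rate * productivity_loss
    return revenue_loss + labor_loss

# e.g. $20M/year earned over 2,000 business hours, 50 affected staff
print(hourly_downtime_cost(20_000_000, 2_000, 50, 40))  # 12000.0
```

A figure like "$12,000 per hour of downtime" lands in a budget meeting far better than any description of the underlying failure mode.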

So, How Much Is This Going to Cost?

Bonnie del Conte is president of CONNSTEP, Inc., a Connecticut-based company that helps companies across a range of industries implement ISO. She says the biggest expenses related to ISO certification are payroll, the creation of assessment documentation and systems (e.g., documentation for periodic assessments, including both paper and software) and new-employee training programs. Firms like hers charge consulting fees on top of the cost of the certification audit itself. At the same time, hiring a consultant can shorten the time it takes to implement standards and complete the audit — and prevent mistakes.

Why It’s Worth It

The ultimate goal of ISO certification is to generate measurable value and improvement within IT. It’s about how proactive, progressive awareness and problem-solving prevents disasters, improves service and makes operations more efficient. Its greatest intangible benefit, says del Conte, is often a better relationship between IT and management. “Companies find improved communication from management,” del Conte says, “due to more transparency about expectations and the role that everyone has in satisfying customer expectations.”

Don’t try to become the perfect IT service team or address every security vulnerability the first time around. Hit the most important points and then progressively look deeper with every assessment cycle. As your operations improve, so will IT’s culture and its relationship with the business side. If ISO certification helps you prove that IT is way more than a cost center, it’s worth the investment.

A decade ago, organizations expected a disconnect between IT and other relevant business units. After all, support was little more than a cost center, necessary to keep enterprises up and running but outside line-of-business (LoB) revenue streams. Movement in cloud computing, big data and, more recently, the burgeoning Internet of Things (IoT) has caused this trend to do a one-eighty.

IT is now a critical part of any boardroom discussion, with total network visibility playing a lead role in a company’s pursuit of a healthier bottom line. According to Frost & Sullivan, in fact, the network monitoring market should reach $4.4 billion in just two years, double its 2012 revenue. Of course, talking about the benefits of a “single pane of glass” is one thing; IT pros need actionable scenarios to drive better budgets and improve productivity. Here’s a look at the top 10 cases for total network transparency.

1) Security

As noted by FedTech Magazine, the enemy of IT security is a lack of visibility. If you can’t view your network end-to-end, hackers or malware can slip through undetected. Once inside, this presence is given free rein until it brushes up against continuously monitored systems such as payment portals or HR databases. Complete visibility lets admins see security threats the moment they appear, and respond without delay.

2) Automation

Single-pane-of-glass visibility also lets IT pros automate specific tasks to improve overall performance. Consider eDiscovery or big data processing; while you can configure and perform these tasks manually, the IT desk’s time is often better spent forwarding strategic business objectives. Total network visibility allows you to easily determine which processes are a good fit for automation and which are best left in human hands.

3) Identification

According to a recent Clearswift report, 74 percent of all data breaches start from inside your organization. In some cases employees are simply misusing cloud services or access points, whereas in others, the objective is to defraud or defame. Either way, you need to know who’s doing what in your network, and why. Visibility into all systems — and who’s logging on — helps combat the risk of insider threats.

4) Prediction

You can’t always be in the office. What happens when you’re on the road or at home but the network still requires oversight? Many monitoring solutions now include mobile support, allowing you to log in from a smartphone or tablet to check on current conditions. This is especially useful if you’re out of town but receive warning about severe weather moving in. Total visibility gives you the lead time needed to prep servers and backup solutions to handle the storm.

5) Analytics

Effective data analysis can make or break your bottom line. As noted by RCR Wireless, real-time network visibility is crucial here. The goal is interoperability across systems and platforms to ensure data collection and processing happens quickly enough to provide actionable insight into the key needs of your network.

6) Budget

With a seat at the boardroom table, CISOs and CIOs must now justify IT budget requests as part of their business strategy at large. Using a single pane of glass lets you showcase exactly where investments are paying off — analysis tools or intrusion-detection solutions, for instance — and request commensurate funding to improve IT performance.

7) Proactive Response

It’s better to get ahead than fall behind, obviously, but think of it this way: Network visibility lets you see infrastructure problems in their infancy rather than only after they affect performance. Proactive data about app conflicts or bandwidth issues gives you the upper hand before congestion turns into a backlog of issue tickets.

8) Metrics

Chances are you’ll be called to the boardroom this year to demonstrate how your team is meeting business objectives. Complete visibility lets you collect and compile key metrics that clearly show things like improved uptime, amount of data backed up or new devices added to the network.

9) Training

According to Infosecurity Magazine, 72 percent of IT professionals believe their company isn’t doing enough to educate employees about IT security. With insider threats at an all-time high, network visibility is critical to pinpoint key vulnerabilities and design effective training plans for employees to reduce the chances of a data breach.

10) End-User Improvement

Technology doesn’t always work as intended. And in many cases, employees simply live with poor performance — they grumble but don’t report network slowdowns or crashing apps. Without this data, you can’t improve the system at large. With total network insight, however, you can discover end-user pain points and take corrective steps.

Seeing is believing. More importantly, seeing everything on your network yields actionable insight and bolsters the bottom line.

It’s been a year since Sony Pictures employees logged into their workstations expecting to start a normal workday, only to be greeted by soundbites of gunfire, images of skeletons and threats scrolling across their monitors. To date, the Sony Pictures attack is arguably the most vivid example of advanced persistent threats used to disable a commercial victim. A corporate giant was reduced to posting paper memos, sending faxes and paying over 7,000 employees with paper checks.

How Advanced Persistent Threats Work

Writing for the Wall Street Journal, security expert Bruce Schneier defines advanced persistent threats (APTs) as the most focused and skilled attacks on the Web. They target high-level individuals within an organization, or attack other companies that have access to their target.

After gaining login credentials, cybercriminals gain admin privileges, move data and employ sophisticated methods to evade detection. APTs can persist undetected in networks for months, even years.

What They Do

Most APTs are deployed by government agencies, organized factions of cybercrime or activist groups (often called “hacktivist” groups). According to Verizon’s most recent Data Breach Investigations Report, APTs primarily target three types of organizations: public agencies, technology/information companies and financial institutions.

Some APTs are designed to steal specific information, like a company’s intellectual property. Other APTs, such as the Stuxnet worm, are used to spy on or even attack another government. APTs like those launched by Sony’s attackers seek to embarrass an organization over a particular grievance. Hackers reportedly had a beef with Sony back in 2005, when the company embedded anti-piracy software in its CDs.

Peter Elkind, writing for Fortune, reported that attackers using advanced persistent threats managed to disable Sony Pictures by:

  • Erasing the storage data on 3,262 of 6,797 personal computers and nearly half of its network servers.
  • Writing over these computers’ data in seven different ways and deleting each machine’s startup software.
  • Releasing five Sony Pictures films, including four unreleased movies, to torrent sites for downloading.
  • Dumping 47,000 Social Security numbers, employee salary lists and a series of racist internal emails directed at President Obama.

Limiting Damage from APTs

Maintaining patches and upgrades, running antivirus software and enabling network perimeter detection are worthy defense strategies, but they rarely work against an intruder who’s in possession of high-level login credentials. With sufficient skills, resources and time, attackers can penetrate even the most well-fortified network. Organizations should start by using least-privilege security protocols and training critical employees to recognize and avoid spearphishing attacks.

While you’re at it, use network monitoring to detect APTs early, and watch for the telltale signs of an attack in progress. Some of these are as follows:

Late-Night Login Attempts

A high volume of login attempts occurring when no one’s at work is a simple but critical APT indicator. They may appear to come from legitimate employees, but they’re actually attackers — often in another timezone, according to InfoWorld — using hijacked credentials to access sensitive information at odd hours.
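Flagging those odd-hours attempts is straightforward once authentication events are centralized. A minimal sketch (Python, with invented usernames and a simplistic fixed business-hours window; real deployments would account for timezones and shifts):

```python
from datetime import datetime

def off_hours_logins(events, start=8, end=18):
    """Flag login attempts outside business hours (local time).
    events: (username, ISO-8601 timestamp) pairs from an auth log."""
    flagged = []
    for user, ts in events:
        hour = datetime.fromisoformat(ts).hour
        if not (start <= hour < end):
            flagged.append((user, ts))
    return flagged

events = [("alice", "2016-01-14T09:15:00"),
          ("alice", "2016-01-14T03:02:00"),
          ("bob",   "2016-01-14T03:05:00")]
print(off_hours_logins(events))
```

A cluster of distinct accounts logging in within the same off-hours window, as in the last two events above, is exactly the pattern worth escalating.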

Backdoor Trojans

By dropping backdoor Trojan horse malware on multiple endpoint computers, attackers maintain access to the system even when they lose access in another area. Security personnel should never stop after finding a backdoor Trojan on one computer; there may be more still on the network.

Shadow Infrastructure

Attackers frequently set up an alternate infrastructure within the existing network to communicate with external command-and-control (C&C) servers. Rogue agents have even been known to set up a series of spoof domains and subdomains based on old company names to appear legitimate. When users visited the real domain, the attackers’ C&C server redirected them to fake URLs.

Outbound Data Abnormalities

InfoWorld also suggests looking for strange movements of outbound data, including those against computers within the company’s own network. Attackers love to build internal “way stations,” assemble gigabytes of data and compress the files before extracting them.
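One common way to spot such abnormalities is a simple z-score over per-host outbound volume. This sketch (hypothetical host names and traffic figures) flags hosts sending far more than their historical baseline:

```python
from statistics import mean, pstdev

def outbound_anomalies(daily_bytes_by_host, today, z_threshold=3.0):
    """Flag hosts whose outbound volume today is far above their
    historical mean -- a possible staging 'way station' assembling
    data before exfiltration."""
    flagged = []
    for host, history in daily_bytes_by_host.items():
        mu, sigma = mean(history), pstdev(history)
        if sigma and (today[host] - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

history = {"db-01":   [100, 120, 110, 90, 105],
           "file-02": [200, 210, 190, 205, 195]}
today = {"db-01": 5_000, "file-02": 202}
print(outbound_anomalies(history, today))  # ['db-01']
```

Internal host-to-host transfers deserve the same treatment as internet-bound traffic, since way stations are usually built inside the network first.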

Threat intelligence consultants are always at your disposal, but they shouldn’t be the ones who wait for 15 minutes — surrounded by logged-in workstations — before a single human comes to greet them. To be prepared for a major attack, today’s IT departments should fortify security and network monitoring tools to detect APTs, and tell any contractors they work with to do the same.

Scripting is a popular and powerful choice for automating repeatable file transfer tasks. It can be horrifying, though, to discover just how many scripts the successful operation of your infrastructure relies on. This, and the time they take to execute, is sure to raise the ire of managers and executives. Here are some alternative file-based automation methods to reduce time and errors in common tasks that can benefit from file encryption, deployment redirection, scheduling and more.

DevOps and Automation

It used to be that deployment meant occasionally copying files to a set of physical servers that sat in a datacenter, maybe even on premises. Often this was done via FTP and terminal sessions directly with server names or addresses. If you were smart, you created scripts, albeit with the files, their locations and the servers hardcoded into them.

Automated scripts are an excellent first step, but they have limitations. When a new server is added, an old one is upgraded or replaced, or virtualization changes names and addresses, the result is script failure. Changing OS platforms also means a single set of scripts won’t work across all of your servers. And scripts can be error-prone, too, slowing down when they’re not compiled ahead of time.

With the emergence of agile and DevOps practices, there’s no time to manage these ever-changing environments, so the simplest route is to not do it at all. But because you still need to deploy software somewhere, API-based systems help you achieve the automation required without hardcoding the details. The end product is a much more efficient file transfer process.
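The difference between hardcoding and resolving targets at run time is easy to see in miniature. In this sketch, deployment targets come from a configuration document (which in a real system would be served by an API or service registry rather than an inline string; all names are invented):

```python
import json

def load_targets(config_json, environment):
    """Resolve deployment targets from configuration instead of
    hardcoding hostnames into scripts, so adding or replacing a
    server means editing data, not code."""
    config = json.loads(config_json)
    return [s["host"] for s in config["environments"][environment]]

config = """{
  "environments": {
    "staging":    [{"host": "stg-app-01"}, {"host": "stg-app-02"}],
    "production": [{"host": "prd-app-01"}]
  }
}"""
print(load_targets(config, "staging"))  # ['stg-app-01', 'stg-app-02']
```

When a server is renamed or replaced, only the configuration changes; every transfer workflow that consumes it keeps working unmodified.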

SLAs: Performance and Security

File-based automation reduces the time you spend scripting by handling tasks such as encrypting files or redirecting them to the right servers on arrival. The consistency and predictability of this process helps you meet your service-level agreements (SLAs), which depend on repeatability and the removal of errors. But in order to achieve the performance metrics you need when working in an agile, nimble organization, you need more than that.

An enterprise-grade managed file transfer solution enables you to transfer files reliably, securely and quickly. Look for a solution that offers an event-driven workflow wherein processes are kicked off either according to a schedule or on-demand based on a trigger. Additionally, file transfer workloads need to happen in parallel, simultaneously deploying across your environments to limit the time it takes to deploy changes.
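The shape of such an event-driven, parallel workflow can be sketched in a few lines. Here the transfer step is a placeholder and the trigger is a direct function call; in a real managed file transfer product both would be provided by the platform:

```python
from concurrent.futures import ThreadPoolExecutor

def transfer(file_name, host):
    # Placeholder for the real encrypted, authenticated transfer step.
    return f"{file_name} -> {host}: ok"

def on_file_arrival(file_name, hosts):
    """Event-driven workflow: a file's arrival triggers deployment to
    every target environment in parallel, limiting total deploy time
    to roughly that of the slowest single transfer."""
    with ThreadPoolExecutor(max_workers=len(hosts)) as pool:
        return list(pool.map(lambda h: transfer(file_name, h), hosts))

print(on_file_arrival("build-204.tar.gz", ["app-01", "app-02", "app-03"]))
```

Scheduled kickoffs use the same workflow body; only the trigger differs, which is why trigger-or-schedule flexibility is worth looking for in a solution.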

For peace of mind (yours, specifically), your file transfer solution needs to be hardened. Make sure it uses encryption for all file transfers and integrates with your enterprise identity management to control access to all of your environments. Ultimately, this helps you to conform to the requirements of the most regulated markets (health care and financial, for instance) — as well as local legislation. Your security control should be automated as well, through the use of policy enforcement with secure solutions for authentication (think RADIUS or LDAP).

Finally, you need to know the status of all transfer operations at a glance. With DevOps, constant process monitoring and measuring will lead to further improvements and the removal of bottlenecks. Ensure you have the proper level of reporting and visualization into your file transfers, including those completed, those that may have failed and those that are ongoing.

Moving Files To and From Anywhere

You may need to encrypt and move a file that was just extracted from a business system and is now sitting on a shared drive inside the trusted network. Maybe you need to move an encrypted file sitting on an FTP server in your business partner’s data center. You need the flexibility to encrypt, rename, process and transfer files from any server and deliver them wherever you need them.

Don’t Forget the Cloud

Whether you’re still working on-premises or you’ve already moved many of your systems to the cloud, your file transfer processes should work across both. The reality is that most organizations will continue to keep data living on both, likely settling on a hybrid on-premises and public-cloud mix for security and control purposes. Just as the cloud promises to transparently move user workloads across servers in both environments, your file transfer and deployment solution should do the same. In the end, good management will treat you like the hero you are.

Technology infrastructure has an expiration date. The problem? It's not stamped on the side of the carton, and it isn't posted online. The life cycle of any server, networking device or associated hardware is determined by a combination of local and market factors: What's the competition doing? How quickly is your business growing? Will C-suite executives approve any new spend?

Although there is no hard-and-fast rule for determining your due date, general guidelines exist. Here are some key strategies for your next infrastructure upgrade.

Decisions, Decisions

As noted by Forbes, companies have three basic choices when considering an improvement of their servers and networks: Upgrade specific components, spend for all-new hardware or consider moving a portion of their infrastructure to the cloud. But this is actually step two in the upgrade process. Step one is determining if your existing technology can hang on a little longer, or if a change needs to happen now.

How Did He Do That?

In some cases, your company can avoid spending money by deploying a few MacGyver-style tactics to keep infrastructure up and running — even when upgrades are warranted. Nevertheless, the IT team of Arthur Baxter, Network Operations Analyst at virtual private network service ExpressVPN, tends to avoid these kinds of duct-tape-and-matchstick fixes because, according to Baxter, “they’re not very comprehensible to the next person that has to come along and totally replace what you’ve only barely taped together.” Better-than-average devs and admins all have their own set of tricks to keep infrastructure humming, but they’re typically called “best practices” and aren’t designed to push existing infrastructure past its limits. In other words, while sticking servers together with charisma and clever workarounds can extend hardware life, the results are unpredictable.

The Time Has Come

How do you know when it’s time for an upgrade? Company growth is a good indicator, and this could take the form of global expansion or an effort to make best use of big data. According to Baxter, however, advances in the industry may also force your hand: “If there’s something newer and better on the market, it’s [ideal] for an upgrade,” regardless of your infrastructure’s current performance. Budget limitations play a role, since it’s not always possible to commit the cash necessary for a better server or new network technology. He points out, though, that “top companies stay on the cutting edge of what’s available.” Delaying too long in an effort to extend the lifecycle of existing hardware could put you behind the curve.

Making the Case

Even when it’s time for an infrastructure upgrade, it’s a safe bet that supervisors and executives won’t hand out big-budget increases just because you ask nicely. It’s always a good idea to make your case using measurable improvements — such as increased network performance, storage capacity, agility and system resiliency — but it’s also worth exploring other ways to justify technology spending. “The best way,” argues Baxter, “is to find a consultant or join some vendor sessions.” If you have a large support budget, you can also request a vendor proposal. By getting these experts to advocate for their technology, and then backing up this marketing spin with your own analysis, it is possible to showcase the line-of-business benefits that come with your proposed strategy.

Cost and user experience are also excellent talking points, supported in a Huffington Post piece that discusses the need for upgrades to American election infrastructure. Not only can better technology save money — between $0.50 and $2.34 for every voter registered online — but the convenience of online and electronic voting platforms can increase voter turnout. So, for your upgrade proposal, consider showcasing how improved resiliency can reduce potential costs in the event of a data breach, or how greater agility can improve the end-user experience with better access to critical network functions.

Do you need an infrastructure upgrade? If you’re asking, your due date has arrived. And while MacGyver-ing your hardware into another business quarter is one way to prolong its life, you’re better off pitching supervisors and C-suite executives for the upgrade your competition may have already implemented.

Managing Remote Employees

Just 24 percent of workers do their best work in the office during business hours, according to “The Geek Gap” co-author Minda Zetlin, writing for Inc. In fact, telework is so appealing that nearly half of workers would give up certain perks for a remote-work option, and 30 percent would take a pay cut.

Additional data from FlexJobs suggests managing remote employees can save businesses $11,000 annually for each person (you read that right), and that’s for everyone who works at least half-time from home. As if that weren’t enough, many telecommuters claim they’re more productive than their cubicle-inhabiting counterparts, and they’re also happier with their jobs.

For the IT department, managing remote employees poses two major challenges: secure connections and personal device usage. And when employees are offsite, success requires consistent communication and clearly defined roles and responsibilities. IT departments that not only support but empower remote work become big contributors to the company’s bottom line.

Security Concerns of Managing Remote Employees

The biggest challenges when managing remote employees, according to Microsoft technical solutions pro Robert Kiilsgaard, aren’t training or application troubleshooting; they’re login issues and secure connectivity. “As much as 30 percent of help-desk volume is related to just resetting passwords,” he says. “This is a huge time sink for the help desk, and a complete loss in productivity for the remote associate.”

Specializing in enterprise architecture and IT transformation, Kiilsgaard recommends an Identity-as-a-Service (IDaaS) solution, which allows you to manage granular access policies, provide single sign-on (SSO) functionality and facilitate self-service password resets. “If you provide a self-service portal for the end user, you have successfully eliminated that call volume. That doesn’t mean you’ve lowered your cost, but you have lowered your Level 1 service desk ticket queue workload and improved the customer-satisfaction part of your business.”

Accessing Applications

When managing remote employees, many organizations offer a patchwork of tools for application access, including virtual private networks (VPNs), virtual desktops and third-party Software-as-a-Service (SaaS) sites. Security concerns permitting, Kiilsgaard also recommends offering a single portal to access all business applications.

If a single access point concerns you, there are also reputable applications that manage passwords via single sign-on. The user only needs to know one password and the application handles the rest. This is inherently safer, since the user doesn’t need to know the passwords to any of the business applications or services. It also avoids the cost of deploying, monitoring and managing VPNs and tunneling technologies.
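The core of the one-password model is key derivation: the master password the user knows is stretched into a key that unlocks everything else. A minimal sketch using Python's standard library follows; the iteration count and function names are illustrative, and a real SSO or password-vault product does considerably more than this.

```python
import hashlib
import hmac

def derive_vault_key(master_password, salt, iterations=200_000):
    """Stretch the one password the user knows into a 32-byte vault key."""
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode("utf-8"), salt, iterations
    )

def verify_master_password(candidate, salt, stored_key, iterations=200_000):
    """Constant-time check that a login attempt matches the stored key."""
    return hmac.compare_digest(
        derive_vault_key(candidate, salt, iterations), stored_key
    )
```

The derived key (never the password itself) is what encrypts the vault of per-application credentials, which is why the user can forget every password but one.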

Employees aren’t always savvy about avoiding public Wi-Fi when accessing applications, which leaves them especially vulnerable to man-in-the-middle attacks. They also need to be trained on the risks of using mobile devices for work. Key topics include:

  • Enabling remote wipe for lost and stolen devices
  • The responsible use of company data, including storage on personal devices and transmission in email
  • Using only authorized applications when collaborating, sharing and performing work on sensitive data, so third parties don’t gain access to this content

What If He/She’s IT?

Jeremy Cucco, deputy CIO for the University of Puget Sound in Tacoma, Washington, says some of the best IT teams he’s managed during his career have either been from a remote location or those whose members performed remotely. Unfortunately, not every IT position is conducive to telecommuting, and it’s important to make sure these roles are managed with this in mind.

“Functional or business analysts often require face-to-face interaction, and server and LAN administrators may need to work locally on machines rather than remotely,” Cucco says. “Allowing software developers and systems administrators to work remotely has often involved either frank discussions with onsite personnel or a documented policy indicating which positions will and will not be allowed to telework.”

Today’s most in-demand employees — those at the support desk among them — want employers that offer a remote-work option. For this reason, employers who accommodate telework gain a significant competitive advantage. “Telework does require a level of personal maturity,” Cucco says. “However, denying that privilege to all based on the limitations of a few is not an acceptable answer in today’s workplace.”

Making It Happen

With smart access policies, ongoing training and clear communication, the IT department can make itself a powerful partner in managing remote employees, whether its members work in-house or develop solutions from an offsite location. It’s a contribution that increases productivity, unleashes innovation through collaboration and builds the workforce of tomorrow.

Knowing which BYOD risks your fellow IT pros face is paramount in determining how to mitigate them. And the scope of BYOD’s influence on company data hasn’t stopped changing since your office first implemented a BYOD policy. What kinds of devices are users likely to bring to work with them? The range of devices encompasses more than just smartphones and tablets. Once these devices are identified, however, the risks they represent can help your team formulate a policy to keep resources safe when accessed from outside the network.

Workers Bring More than One Device to Work

Not long ago, information security only had to worry about employees bringing work home on company laptops and logging in remotely. Then smartphones hit the market, followed by tablets and phablets. On any given day you might see smartwatches, fitness trackers and even smart fobs try to access your network for control over a home automation or security system.

As an example of this proliferation, the U.S. Marine Corps recently partnered with three mobile carriers to provide a total of 21 iOS and Android smartphones to see if secure access to the Corps’ intranet can be delivered. Less than 1 percent of Marines use BlackBerry devices; the rest have moved to mostly Android or iOS. This is consistent with a recent Frost & Sullivan report, which suggests approximately 70 percent of U.S. organizations tolerate BYOD activity — a number that is expected to climb by almost 10 percent in a few years.

BYOD Risks Are Often More Subtle

Mobile devices aren’t usually designed with high security in mind, and cybercrime concerns are often addressed slowly in OS or application updates. Smartphones, smartwatches and wearables may not have the ability to send and execute files remotely, but they may be able to gain access to company APIs and wreak havoc on your UX. Such subtle interference makes these attacks much harder to detect.

One company recently flirted with bankruptcy because it lost a number of lucrative contracts due to overbidding. A malicious programmer, after planting malware in the company’s system, was able to manipulate internal APIs to change costing data, causing the sales team to produce inaccurate prices for their clients.

Watch for Lateral Movement

In a recent report titled “Defending Against the Digital Invasion,” Information Security magazine suggests mobile devices “can easily turn into a beachhead that an attacker can use to compromise your network. Proper onboarding, network segmentation and testing of these devices will be critical, but these processes have to be developed to scale.”

Chances are, malware will have already breached your perimeter security controls by the time it touches a personal device. In order to defend against this kind of intrusion, your controls need to be able to detect and monitor lateral movement. They should also be applied continuously to identify threats before they cause damage. In the first part of 2015, for instance, there were several thousand reports of malware targeting connected disk-storage devices — network surveillance camera storage devices among them — scanning for exactly these kinds of potential beachheads.

Mobile Devices Can Make DDoS Attacks Easier

Mobile device APIs often lack sufficient rate limits, and they’re quite easy to exploit for DDoS attacks. Because the requests generated in this type of attack originate from within the network, they are harder to detect and can quickly overwhelm a backend database. Future DDoS attackers may use mobile devices to target specific application-layer resource bottlenecks. Already inside the network, they can send fewer requests that are significantly harder to filter out than external DDoS traffic because they “fit in” with normal queries.
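For illustration, the kind of per-client rate limit that mobile APIs often lack can be as simple as a token bucket. This is a generic sketch, not any particular vendor's implementation; the injectable clock exists only to make the behavior easy to demonstrate.

```python
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self):
        """Return True if this request is within the limit, else False."""
        now = self.clock()
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Keeping one bucket per API key or device means a single compromised phone can only drain its own allowance instead of hammering the backend database.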

The Top 10 Hidden Network Costs of BYOD

As wireless becomes your primary user network, you need to deliver the availability and performance your users expect from the wired network. BYOD complicates this by increasing network density, bandwidth consumption and security risks. Download this Ipswitch white paper and learn the top 10 hidden network costs of BYOD.

Related Articles:

Noble Truth #1: Networks Buckling Under BYOD and Bandwidth

College Networks Getting Schooled on BYOD

The battle over privacy vs. security is a constant reminder of not just how far the Web has taken us, but how far we have to go to agree on its public usage. On one side you have an army of users who trust you — or aren’t aware they are trusting you — with the sensitive information on their machines. On the other side is an ever-looming governmental presence, which seeks access to users’ data in an effort to protect a much larger set of interests.

How are we to hold these two sides in perfect balance? Is there a perfect balance? Well, yes and no. Bear with me.

Do You Feel Lucky?

Well, do ya? While you may never know the full extent to which the government has “collected” private-sector information, it’s a fair bet that the figure would be humbling. And whether or not you deem this practice justified, surely the rest of the workplace assumes their private information remains just that with support’s help. With this in mind, it’s a good idea to ask yourself, barring the hands you can’t slap away: “Am I actively protecting staff’s privacy?”

As you formulate a response, think about how your staff might react if they found out their privacy had been compromised. Would they — or more likely, their lawyer — see the security measures you do have in place and frown upon them? How about if your own privacy was on the line (and it is)?

Not Where, But How You Draw the Line

It’s important to remember that the fine line you draw between privacy and security isn’t universal. In fact, it often isn’t even straight, according to Chris Ellis, former data security officer for a government security contractor and current consultant for all things cybersecurity.

“I think the concept of privacy is a very individual matter,” Ellis suggests. “I’ve met people who wouldn’t bat an eye at checking their bank account on a public computer. When I tell them how easy it can be for someone to steal that information, they’d just shrug it off. On the other extreme, one of my best friends insists on using the ‘Incognito’ tab [Chrome’s private window] for every browsing session, even on his own devices.”

These two archetypes obviously have different thresholds for privacy. It’s ultimately up to the sysadmin to determine which concerns are valid — and to what extent — within the business despite what the government says it needs.

Transparent Policy, Not Security

Ellis’ insight here applies to more aspects of your network than you may think. Rather than a solitary decision made against a static environment, the solution to the privacy vs. security debate is an aggregate one. Unfortunately for the helpdesk, appeasing everyone’s individual privacy concerns isn’t practical. Ellis insists, however, that a happy medium can be found when users are able to appreciate the fragility of online privacy.

“What I’ve come to find is that end users are most concerned with privacy when their information is in someone else’s hands, even legitimately,” he observes. “I’m always surprised to see how much more responsible users are with personal information when organizations are transparent about their security practices and inherent limitations.”

At the end of the day, you can only provide the tools and environments that enable secure data storage and file transfer. As users begin to understand the parameters that separate their own privacy from a greater security standard, they’re less likely to cry foul and more likely to embrace secure habits themselves. I don’t know about you, but in my book that’s a win-win.

Tell your users the risks, show them how they’re protected and provide the tools necessary for them to make up the difference.

>> To learn more about secure managed file transfer, check out our white paper: “Security Throughout the File Transfer Life-Cycle: A Managed File Transfer Imperative”.

What issues lie ahead for IT pros this year?

IT pros continue to work diligently behind the scenes to ensure their digital businesses stay connected. Understanding the critical issues most likely to cause a disruption is half the battle. Just one week into 2016, here are 4 problematic issues that will affect the IT pro this year:

1.  Increasing Vulnerabilities and Zero Day Attacks

The harsh reality is that companies are still not doing enough when it comes to vulnerabilities within the corporate network. While accepting some level of risk is part of business, 2016 will continue the trend of more sophisticated and better-funded attackers. For many organizations, the ability to manage vulnerabilities will be the difference between becoming a victim of a cyber-attack and being the outlier that takes the steps necessary to protect its network.

2.  Limited IT Security Resources

A recent Ipswitch report shows infrastructure threats are most common among mid-sized IT departments where budgeted resources are limited. The sheer volume of daily tasks for IT teams will be a primary challenge in 2016 and likely necessitate a change to current IT infrastructure in order to provide increased performance, agility and compliance across the business. Prioritization and automation will be essential elements to effective IT in 2016.

3.  IT Pros Get Introduced to the “New” Employee

2016 will mark the introduction of the “new” employee: one who has firm expectations about personal device usage, remote access and more. As the workforce continues to diversify, IT will need to adapt to employees’ steadfast expectations about how they should work and live, including working on the devices they are comfortable with and using the applications they prefer. This is the new reality organizations face and must adapt to in order to remain effective.

4.  Deployment of New Technologies

Implementing new technology, and keeping pace with the changes it brings, is top of mind for IT teams heading into 2016. With offerings so varied and far-reaching, selecting and deploying new tools has become more complicated than ever. Staying well-informed about the growing complexity and risks within an IT environment will allow IT teams to better manage the deployment of new technology while increasing overall productivity and business continuity.

Since it’s the role of IT to keep the infrastructure protected and running at peak performance, a successful IT team is often the difference between success and failure for any organization. Identifying and preparing for what’s to come ultimately betters the overall business. Issues and challenges are certain to arise, but anticipating them will allow IT teams to address problems more quickly and efficiently.

Related articles:

8 Issues Derailing IT Team Innovation in 2016

8 Common IT Communication Challenges and What to Do About Them

This post originally appeared in Virtual Strategy Magazine on December 28, 2015.

For the past few years, the tech industry has become fixated on kicking off the new year with a festival of connected devices at the annual Consumer Electronics Show (CES) in Las Vegas. The fact that this show has become so significant to the tech industry is another indication of the potential importance of the ‘Internet of Things’ (IoT) and the growing impact of the ‘consumerization of IT’ on the way IT is adopted and managed.

The idea of the IoT has gained widespread attention because of the increasingly attractive economics of connectivity, driven by advancements in nanotechnology and the maturation of the Cloud. Sensor technology has become a commodity and can be embedded into almost anything. And the Cloud provides ubiquitous connectivity, almost infinite storage and easy access to compute power at low cost.

As a consequence of these technological forces, combined with the growing conveniences offered by connected ‘things’, consumers and corporations alike are seeking new ways to capitalize on the rapidly expanding universe of IoT. For example, almost a third (31%) of the 378 IT professionals in the U.S. recently polled by Ipswitch identified ‘wearables’ as among the must-have gadgets of 2016.

The significance of these trends and importance of the CES conference is clearly illustrated by the number of CXOs speaking at the event from Intel, IBM and other global technology leaders, as well as major corporations like GM and VW.

Four Primary Levels of Value Driving Business to Adopt IoT Strategies

THINKstrategies believes there are four primary levels of value that are driving businesses to adopt IoT strategies:

  1. To more quickly respond to product/service problems when they arise.
  2. To anticipate issues before they emerge to mitigate the risk of customer problems.
  3. To improve current operations, products and services.
  4. To identify new market opportunities that can transform a business.

Stated even more succinctly, IoT can help organizations better serve their existing customers and pursue new business opportunities.

All of these benefits are particularly important in an era in which it is becoming increasingly difficult to win and retain customers, differentiate your products and services, and gain a sustainable competitive advantage.

However, the IT organization must re-think its role and responsibilities in order to help the broader enterprise successfully capitalize on IoT’s promise.

From Past to Present

In the past, the IT function was narrowly focused on installing and managing computing systems and software programs for primarily internal purposes. IT was originally focused on deploying and administering highly centralized mainframe systems and software utilized by a relatively small team of specialized staff. Distributed computing spread the IT function out to various departments, but still primarily for internal business process purposes. Personal computers forced IT to become more end-user oriented, and laptops demanded a new set of remote access methods so employees could tap internal hardware and software resources. The explosion of mobile devices created new IT challenges regarding security and control, but they still supported internal business processes.

Succeeding in the IoT not only means making the right technological decisions to connect to things, it also means doing so in such a way that the critical data being generated by those things can be captured, analyzed and utilized in a secure fashion to achieve the benefits outlined above.

This brings a whole new meaning to the idea of IT becoming aligned with the business. Therefore, the IT organization must play a more active role in the product/service development lifecycle process to ensure the right sensors, networks, storage, analytics and security are being employed to achieve their IoT business objectives.

As a consequence, IT must adopt new techniques and tools to monitor networks, servers and applications, and ensure data is being transferred securely between products, customers and partners.

Related Articles

The Internet of Things: A Real-World View

‘Twas the Night(mare) Before Christmas for IT Pros