Ipswitch Blog

Where IT Pros Go to Grow

Our Latest Posts

IT teams work valiantly behind the scenes every day to make sure their digital businesses stay connected. With challenges like dealing with cyber threats and new technology, or even just the sheer volume of day-to-day work, it is getting harder and harder for IT teams to keep necessary innovation from going off the rails. These threats to innovation are most glaring in small to mid-sized IT departments where personnel and budget resources tend to be more limited, and team members need to be both generalists and specialists. These are the true front lines of IT – where decisions need to be made quickly and business operations depend on systems functioning properly.

A recent survey by Ipswitch polling 2,685 IT professionals around the world indicated that the top challenges holding IT teams back in 2016 fell into eight distinct categories, with network and application performance monitoring (19 per cent), new technology updates and deployments (14 per cent) and time, budget and resource constraints (10 per cent) among the top responses.

Improving network performance

Ensuring network performance is no easy feat. IT teams are tasked with keeping an organisation’s networks running efficiently and effectively around the clock and need to be concerned with all aspects of network infrastructure, including apps, servers and network connected devices.

Application performance is an important aspect because every company relies on networked applications, and an interruption in performance means a halt to business. Workforce fluidity further complicates network performance, as does the proliferation of devices logging on, whether the activity is sanctioned (work laptops, phones etc.) or surreptitious (many forms of wearable tech).

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes; all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the IoT (Internet of Things) age.

The good news is that businesses often already have the monitoring tools they need to spot tell-tale signs of the network beginning to falter; they just aren’t using them to full effect. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate root causes of complex issues quickly.

Efficient use of network monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to log in remotely at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.
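
To make that idea concrete, here is a minimal sketch, in Python with invented sample data, of the kind of aggregation a monitoring tool performs behind the scenes: bucketing bandwidth readings by hour of day so that a recurring morning login spike stands out.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical (timestamp, Mbps) samples exported from a monitoring tool;
# real data would span weeks or months of polling.
samples = [
    ("2016-01-04T09:00", 840), ("2016-01-04T13:00", 310),
    ("2016-01-05T09:05", 910), ("2016-01-05T13:10", 290),
]

# Average utilisation per hour of day; recurring daily peaks stand out.
by_hour = defaultdict(list)
for ts, mbps in samples:
    by_hour[datetime.fromisoformat(ts).hour].append(mbps)

for hour in sorted(by_hour):
    readings = by_hour[hour]
    print(f"{hour:02d}:00  avg {sum(readings) / len(readings):.0f} Mbps")
```

The same grouping by day of month or by month surfaces the longer-term trends mentioned above.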

Evolving performance monitoring 

Infrastructure monitoring systems have evolved greatly over time, offering more automation and more ways to alert network administrators and IT managers to problems with the network. IT environments have become much more complex, resulting in a growing demand for comprehensive network, infrastructure and application monitoring tools. IT is constantly changing and evolving with organisations embracing cost-effective and consolidated IT management tools.

With that in mind, Ipswitch unveiled WhatsUp Gold 16.4, the newest version of its industry-leading unified infrastructure and application monitoring software. The new capabilities in WhatsUp Gold 16.4 help IT teams find and fix problems before end users are affected, and are a direct result of user feedback. Efficient and effective network monitoring delivers greater visibility into network and application performance, quickly identifying issues to reduce troubleshooting time.

One thing is certain when it comes to network monitoring: the cost of not implementing such a technology far outweighs the cost of doing so, especially once you start to add up the cost of downtime, troubleshooting, and performance and availability issues.

Related articles:

8 Issues Derailing IT Team Innovation in 2016

As confirmed by PricewaterhouseCoopers, attacks against small and midsized businesses (SMBs) between 2013 and 2014 increased by 64 percent. Why? Low price, high reward.

Attackers can break through millions of poorly defended SMBs through automation, gaining access to a treasure trove of data. Small-business vulnerability assessments can identify your weaknesses, but they take time away from daily operations. Is a security vulnerability assessment really worth the resources? These five questions will help you decide.

What Does It Entail?

A vulnerability assessment identifies precious assets as well as how attackers could steal them from you. Not surprisingly, 2014’s most common attack vectors were:

  • Software exploit (53 percent).
  • User interaction, such as opening a malicious email attachment or clicking through an unsafe URL (44 percent).
  • Web application vulnerability, like SQL injection, XSS or remote file inclusion (33 percent).
  • Use of stolen credentials (33 percent).
  • DDoS (10 percent).

It’s impossible to patch every vulnerability. “You can scan and patch 24/7, 365 days a year,” says Forrester security researcher Kelley Mak, “and still not take out a significant chunk.” The key is to identify vulnerabilities that will result in the most damage to your bottom line.

How Frequently Should We Assess?

Frequency depends on what kind of data you store and what kind of business you operate. If you can say yes to the following, you should assess more often:

  • You’ve never run a security vulnerability assessment before, or it’s been a while. In either case, establish a baseline with frequent assessments for a year or so, then dial back the frequency.
  • You’re subject to regulatory compliance. If you’re just checking boxes, you’re only getting a limited security picture. Compliance is a baseline, not an effective defensive posture.
  • You’re a contractor for a government agency or valuable enterprise target. Cybercriminals love to use SMB vendors to break into higher-value targets. If one of your employees’ stolen authentication creds cost an enterprise millions of dollars, you’d kiss your contract goodbye.

Can Ops Do It?

Give another sysadmin the SANS 20 recommended list of security controls. If they can understand the controls, evaluate the business against them and remediate all associated issues, let them handle it.

Already too busy to take on the project? Bring in a specialist. Keep expenses down by getting an initial third-party assessment, drafting an action plan and joining the entire ops team in implementing it.

What Does a Top-Notch Third-Party Assessment Look Like?

Before you hire someone, ask them to explain how they conduct a security vulnerability assessment. According to Robbie Higgins, CISO of AbbVie and author for SearchMidmarketSecurity, their services should include:

  • Information and infrastructure evaluation. The consultant should look at your information systems, stored data, hardware and software. Critical systems like billing, HR, CRM, legal and IP repositories are vital, but you should also focus on minor systems accessible by your own vendors.
  • Current threat landscape. In addition to knowing today’s common exploits and malware trends, your consultant should tell you what types of data attackers are after as of late and what kinds of organizations they’re currently targeting.
  • Awareness of internal soft spots. Attacks don’t always happen because employees are disgruntled. Simple incorrect data entry can expose you to an SQL injection.
  • Estimated impact. Your vendor should explain the degree to which each security vulnerability would affect data integrity, confidentiality and availability of your network resources.
  • Risk assessment. A good vendor combines weaknesses, threat landscape and potential impact to extrapolate your risks in priority order.
  • An action plan. Again, save on security consultation by letting your team execute this roadmap.

Is It Worth It?

Assessments and remediation could cost you in short-term payroll or a third-party consultant’s fee. But if they prevent a data breach that could shut down your business, almost any price is worthwhile.

It’s a fact of IT life that technology has a finite lifespan, and managing technological change is tough. Procuring new software and hardware is only half the battle. The other half is what happens next, and it runs the gamut from integration to accessibility to security. This part gets tricky.

Need help? Here are 7 of the most common challenges you’ll face when you manage change during a technology transition, and how to deal with them.

1) Cultural Pushback

IT pros think about the nuts and bolts of new technology implementation from beginning to end, including how to manage change. Front-line workers care how a new CRM or analytics tool is going to affect their daily job. IT teams need to communicate why a switchover is happening, the business benefits behind it, and what great things it means for the user. Your best bet is to get them prepared, over-communicate and stay on schedule. Make sure employees and executives alike have had every opportunity to learn what to expect when the transition goes live.

2) Handling Hype

When you manage change in technology you need to manage any hype attached to it. Look at artificial intelligence (AI) solutions: given their cultural appeal, many users have extremely high expectations and are often disappointed by the end results. And with respect to the current direction of AI development, according to Hackaday, it’s unlikely that devices will ever live up to those expectations. Instead, a “new definition of intelligence” may be required.

In another example, consider the benefits and drawbacks of implementing a new OS such as Windows 10. Some users may want to upgrade right away, but an OS switch requires a plethora of testing, such as verifying application compatibility, and some of the most important updates for a new OS take at least a few months to arrive.

So what does this mean for IT pros during a tech transition? It means being clear about exactly what new tech will (and won’t) deliver, and communicating this to everyone.

3) Failure Can Happen

Things don’t always go as planned. In some cases new technology can actually make things worse. A recent article from The Independent notes that particulate filters introduced to curb NO2 emissions from vehicles actually had the opposite effect. The same goes for IT. If you are working on a new implementation that is unproven or risky, start small and consider it an A/B test outside the DMZ instead of a big bomb you have to somehow justify blowing up.

4) Risky ROI

While companies love to talk about ROI and technology going hand-in-hand, software-driven revenue is “mostly fiction,” according to Information Week. Bottom line? The more a solution costs to build or buy, the more you’ll need to invest in organizational redesign and retraining. In other words, technology does not operate in a vacuum.

5) Prepare for People

What happens when technology doesn’t work as intended? Employees and executives will come looking for answers. The fastest way to lose their confidence is by clamming up and refusing to talk about what happened or what’s coming next. It may not be worth breaking down the granular backend for them. Being prepared with a high-level explanation and potential timeline for restoration goes a long way toward instilling patience.

6) Lost in Translation

It’s easy for even simple messages to get garbled on their way up the management chain. Before, during and after the implementation of new technology, clarity is your watchword. Short, basic responses in everyday language to tech-oriented questions have the lowest chance of changing form from one ear to the next. You also don’t need to share every detail; just tell your users what they need to know. Providing too much information can be harmful and lead to confusion, even when they think they understand.

7) It’s Not Fair

Guess what? Even when things are beyond your control, you’re still shouldering the blame. And because new technology implementation never goes exactly as planned, it’s good to have a backup plan. Say you’re rolling out IPv6 support for your website but things aren’t going well; you need an IPv4 fallback in your back pocket so that file-transfer and page-load times don’t increase your bounce rate or tick off internal staff.

Unfortunately, “it’s not my fault” doesn’t apply in IT, however often you feel you could say so. On the hook for managing change in technology? Chances are you’ll face at least one of these seven challenges on the road to effective implementation.

Click here for a free 30-day trial of WhatsUp Gold

For most companies, a new year means a clean slate to renew goals and focus on success. Here at Ipswitch, we started the year releasing major improvements to WhatsUp Gold with version 16.4.

Diving Deep into WhatsUp Gold 16.4

In a previous blog post, my colleague Kevin Conklin outlined the general highlights of these updates. In this post, I will take a deeper dive into each improvement and its implications for monitoring your networks.

For those of you who want a super deep dive, check out this video that provides an 11-minute technical overview of WhatsUp Gold 16.4.

SSL Certificate Monitor

If you are responsible for web servers that use HTTPS, this monitor can save you serious embarrassment and potential loss of customers and revenue. If a certificate expires on your web server, your customers will be shown a scary expiration message instead of your web page.  While the message does allow your customers to get to your website via a special link, many customers will lose trust in your website and will simply abandon the page, and might never return.

To solve this nasty problem, the SSL Certificate Monitor will alert you a number of days before a certificate expires, based on a warning time frame that you select.   A common setting for this monitor is 30 or 60 days.  Thus, if an alert is triggered, you will have plenty of time to get a new certificate and load it before the current certificate expires.  Your customers will never know.

In addition to checking for certificate expiration, the monitor also tests to see if the DNS name of the web server matches the canonical name in the certificate. This is also a frequently encountered configuration error which can cause angst for your customers.  With this monitor, you can ensure that this configuration error never happens.
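
A monitoring product handles this for you, but the underlying check is simple enough to sketch. Here is a rough Python illustration, with a placeholder hostname and a 30-day warning window; note that the default ssl context also verifies that the server’s name matches the certificate, the second error discussed above.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname, port=443):
    """Connect over TLS and return days before the certificate expires."""
    ctx = ssl.create_default_context()  # also validates the hostname match
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

WARN_DAYS = 30  # a common warning window, as noted above
remaining = days_until_expiry("example.com")  # placeholder host
if remaining < WARN_DAYS:
    print(f"WARNING: certificate expires in {remaining} days")
```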

File Content Monitor

In WhatsUp Gold 16.4 we added a great new tool for the IT admin’s toolbox: the file content monitor. The monitor is deceptively simple.  It scans a text file or files of your choosing looking for a string, and then alerts if it finds the string.  This opens up WhatsUp Gold to monitoring lots of things that it couldn’t before, except through custom scripting.

A common use case is to monitor the logs of custom applications. Let’s say that a custom application puts the word ‘error’ into a log text file when some problem occurs.  Using this monitor, you can be alerted when this happens.  We’ve made sure that the monitor remembers where it was in the log file between polls, so it won’t alert again on the same error.  Or, you can have the monitor read the log file from the start on each poll, which handles other logging use cases, such as when a log file is re-written on a regular interval.  This is one of those monitors that can be used in all sorts of creative ways in diverse networks.
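
The offset-tracking behaviour is easy to picture in code. Here is a bare-bones Python equivalent, with invented file names and search pattern, that only examines lines appended since the previous poll:

```python
import os

LOG_FILE = "app.log"            # hypothetical custom application log
OFFSET_FILE = "app.log.offset"  # remembers where the last poll stopped
PATTERN = "error"

def poll_log():
    """Return matching lines appended since the previous poll."""
    offset = 0
    if os.path.exists(OFFSET_FILE):
        with open(OFFSET_FILE) as f:
            offset = int(f.read() or 0)
    matches = []
    with open(LOG_FILE) as log:
        log.seek(offset)
        for line in log:
            if PATTERN in line.lower():
                matches.append(line.rstrip())
        offset = log.tell()   # starting position for the next poll
    with open(OFFSET_FILE, "w") as f:
        f.write(str(offset))
    return matches

for hit in poll_log():
    print("ALERT:", hit)  # a real monitor raises an alert instead of printing
```

Resetting the stored offset to zero on every poll gives you the read-from-the-start mode described above.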

Flow Monitor

Flow Monitor is a great plugin for WhatsUp Gold. It gives IT admins a detailed view of their network like no other part of the WhatsUp Gold platform.  We’ve made a couple of key changes to Flow Monitor in this release.

First, on the Flow Sources page, we have added better sorting and filtering. You are now able to filter sources based on DNS names and IP addresses, or any part thereof.  If you want to see the sources that have interfaces in the 192.168.1.x network, no problem: just type in ‘192.168.1’. In addition, we’ve added better sorting on the sources page.  Both improvements were requested by our customers, especially those dealing with a large number of flow sources.

In addition to these interface improvements, we’ve also added two new reports: Top Endpoints, and Top Endpoint Groups.  A common use for Flow Monitor is to show which devices on your network are sending or receiving the most data.  We have reports like Top Senders and Top Receivers for this.  But we’ve never had a report that showed the devices on your network based on total traffic, both sending and receiving.  That’s what the Top Endpoints report does.  In addition, like many of our other reports, we have a version of it meant for groups of IP addresses that you define, giving you a way to make your environment more understandable.  With these two reports, you can really get at your bandwidth hogs like never before.

What’s Next

These improvements will make using monitors in WhatsUp Gold easier and more user friendly. In 2015, we identified the places WhatsUp Gold could be stronger and more useful on a day-to-day basis. This work prepared us to launch these exciting upgrades in 2016 and start the year off right.  Look for my next blog post for another deep dive into more new features.

Web security consists of multiple moving parts that can move in opposite directions. As a result, actions or technologies that improve one aspect of security may weaken another. Some enhancements might end up compromising your overall Web security.

An entanglement of just this sort builds even more complexity around the issue of government monitoring. Should there be limits on how much Web traffic merits encryption? Should law enforcement have “back door” access to encrypted activity? More to the point, what are the security implications of these policies or standards for your department?

This concern isn’t about government traffic monitoring in general, however strong (and mixed) many people’s feelings may be about the government monitoring personal content. Your questions relating to encryption are narrower and less ideological, in a sense, because they carry profound implications for your company’s Web security.

A Double-Edged Sword

Online encryption wars are not new; as Cat Zakrzewski reports at TechCrunch, the debate goes back two decades. With so many growing more concerned about Web security, though, the issue has new urgency. In a nutshell: It is widely agreed in cybersecurity that encryption — particularly end-to-end encryption — is one of the most powerful tools in your infosec toolbox. For thieves, stolen data is a worthless jumble if they can’t read it. That’s the point of encryption.

End-to-end encryption provides a layer of protection to data over its full journey, from sender to recipient. Wherever thieves may intercept it along the way, all they can steal is gibberish. Law enforcement’s concern about this depth of encryption, however, is that anyone can use it — from terrorists to common criminals, both of whom have particularly strong reason to avoid being overheard. Moreover, new categories of malware, such as ransomware, work by encrypting the victim’s data so the blackmailer can then demand assets before decrypting it to make it usable again.

For Whom the Key Works

This problem is difficult, but not unusual: If lockboxes are available, cybercriminals can use them to protect their own nefarious secrets. The effective legal response is to then require that all lawfully sold lockboxes come with a universal passkey available to the police, who can then open them. There’s your back-door access.

But that’s where things get complicated. If a universal passkey for back-door access exists, it could potentially fall into the hands of unauthorized users — who can use it to read any encrypted message they intercept. Your personal mail, your bank’s account records, whatever they get access to.

(The NSA and its affiliates abroad can build their own encryption engines without this vulnerability, but such high-powered technology isn’t cheap — beyond the means of most criminals, terrorists and the like, of course.)

More Keys, More Endpoints

A special passkey available to law enforcement would presumably be very closely held, and not the sort of thing bad actors are likely to get their hands on by compromising an FBI clerk’s computer. But the primary concern in cybersecurity is that the software mods needed to provide a back door would make encryption less robust. This means encryption will be less effective for all uses, even the most legitimate ones.

In essence, a lock that two different keys can open is inherently easier for a burglar to pick. According to Reuters, White House cybersecurity coordinator Michael Daniel acknowledged he knew no one in the security community who agreed with him that a back door wouldn’t compromise encryption.

Crucially, this problem is independent of any concern about the governmental misuse of back-door decryption technology. Even if no government agency ever used the back door to decrypt a message, its existence makes it possible for a third party to reverse-engineer the key, or exploit a subtle bug in the backdoor functionality — thus enabling them to read the once-encrypted messages.

Encryption isn’t an absolute security protection; nothing is. But it is one of the most powerful security tools available, and your team is rightfully concerned about the risks of compromising it.

The hotshot developer your company just lost to a competitor could also be your biggest risk of employee data theft. You shouldn’t wait until he’s left carrying a 1TB flash drive full of trade secrets to worry about what else may have just walked out the door.

But suppose you need to clean up a mess, or prevent one from occurring after somebody moves on. What steps can you take?

From Irate to Exfiltrate

First, understand what you’re stepping into. Employee exfiltration is an underreported problem in network defense. Whether because a former staffer has become disaffected, angry or simply accepting of a better offer elsewhere, there are many ways for a motivated knowledge worker to remove important data. And an IT pro is a special category of knowledge worker for whom data exfiltration is the greatest risk.

Back in 2010, as reported by Network World, DARPA asked researchers to study the ways they could improve detection and defense against network insiders. That program, Cyber Insider Threat (CINDER), attempted to address employee data theft — within military or government facilities. Those DARPA contracts were awarded because insider threats were generally neglected, due in part to a dominant perimeter threat mentality.

Research was well underway when in 2013 Edward Snowden demonstrated the full potential for data exfiltration to any remaining disbelievers.

The takeaway for every system administrator and CSO: if you’re only focused on tweaking firewall settings, you may be at risk. Your company’s lost data probably won’t be published in The Guardian or The New York Times, and you won’t be grilled on “60 Minutes.” But you’d be right to sweat it.

Post-Termination Steps

After a termination, there are many steps you could take. The proper course of action will depend upon the employee’s access to data, organizational role and, generally, a mature risk assessment framework. Here are a few to point you in the right direction:

  1. Today, many employees have company data on their mobile devices. Company-owned or company-managed phones may have remote wipe features, such as through Google Apps. Use these to purge sensitive data.
  2. Revoke access to encrypted datasets; according to TechTarget, one approach is to revoke the ex-employee’s certificate.
  3. Studying logs, using tools such as the Ipswitch Log Management Suite, enables you to identify potentially anomalous activity over an extended period of time; the theft may not be recent (see the sketch after this list).
  4. Examination of Windows event logs can help identify whether the ex-employee attached USB devices to a company workstation.
  5. Catalog all applications accessed by the employee, both on-premises and cloud applications.
  6. Working with affected line-of-business managers, identify any sensitive datasets.
  7. If the ex-employee had root or sysadmin privileges, wholesale permission schemes and passwords may need to be updated, especially for off-premises resources.
  8. Ex-employee-managed workstations (and possibly server instances) should be quarantined for a period of time before returning them to the asset pool.
  9. For especially sensitive settings, heightened audit and log monitoring of coworkers for a limited period of time may be called for.
  10. For ex-employees who enjoyed privileged access to IT resources, tools such as Ipswitch WhatsConfigured can identify attempts to relay data to offsite servers or sabotage applications.
  11. Know your application risks. Web conferencing tools like WebEx and GoToMeeting, for example, provide the means to share data outside the corporate sandbox.
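
As a taste of what step 3 looks like in practice, here is a minimal sketch, with a hypothetical username and a standard Linux auth log path, that pulls an ex-employee’s successful SSH logins out of the log so you can review when and from where they connected:

```python
import re

EX_EMPLOYEE = "jdoe"         # hypothetical account under review
LOG = "/var/log/auth.log"    # typical sshd log location on Debian/Ubuntu

# Matches lines like:
# "Jan 12 02:14:07 host sshd[811]: Accepted password for jdoe from 203.0.113.9 ..."
LOGIN = re.compile(r"^(\w{3}\s+\d+\s[\d:]+).*Accepted \w+ for (\S+) from (\S+)")

with open(LOG) as f:
    for line in f:
        m = LOGIN.match(line)
        if m and m.group(2) == EX_EMPLOYEE:
            print(f"{m.group(1)}  login from {m.group(3)}")
```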

Match Points

As with other sysadmin duties, you’ll have to decide how much effort to put into mitigating a potential data loss. Knowing which data has been lost and the potential business impact may be just as important as knowing which logs to examine. In the meantime, don’t overwhelm yourself with false alarms, and don’t underestimate your opponent. These steps can help you even after the employee has left, but best practice dictates that much of this work happens before the termination event.

You’ve probably ceded the first few moves to your opponent. A determined adversary’s next moves might well include tripwires, sniffers and other mischief — at which point you’re going to need even more tools to get things back to normal.

CES, the first big technology event of 2016, wrapped in Vegas last week and, as expected, the Internet of Things (IoT) was a hot topic. If last year’s show was the one where everyone heard about the potential impact of disruptive technology, this year was certainly the year we saw the breadth and depth of the IoT. From the EHang personal minicopter to more fitness tracking devices than you could, erm well, shake a leg at, CES 2016 was abuzz with news of how technology is shrinking, rolling, flying and even becoming invisible.

With everything from ceiling fans to smart feeding bowls for pets  now connecting to the expanding Internet of Things, it’s time to ask how network and IT pros can cope with the escalating pressure on bandwidth and capacity.

Whether we like it or not, the world is becoming increasingly connected. As the online revolution infiltrates every aspect of our daily lives, the Internet of Things (IoT) has gone from an industry buzzword to a very real phenomenon affecting every one of us. This is reflected in predictions by Gartner, which estimates 25 billion connected ‘things’ will be in use globally by 2020. The rapid growth of the IoT was one of the key topics at this year’s CES. SAIC’s Doug Wagoner focused his keynote speech on how the combination of government and citizen use of the IoT could double Gartner’s predicted figure and hit 50 billion connected devices within the next five years.

It’s easy to see why. Just as sales of original IoT catalysts such as smartphones and tablets appear to be plateauing, emerging new tech categories including wearables, smart meters and eWallets are all picking up the baton. The highly anticipated Apple Watch sold 47.5 million units in the three months following its release. Health-tech wristbands, such as Fitbit, have also been very successful, with sales estimated to reach 36 million units in 2015, double the previous year’s figure. Fitbit announced its latest product, the Fitbit Blaze smartwatch, at the show and is marketing it as a release that will ‘ignite the world of health and fitness in 2016’. Such devices are becoming increasingly popular, and partnerships with fashion brands to produce fashionable wearables and jewellery are set to see their popularity continue to grow.

It doesn’t end there either. Industry 4.0 and the rise of the ultra efficient ‘Smart Factory’ looks set to change the face of manufacturing forever, using connected technology to cut waste, downtime and defects to almost zero. Meanwhile, growing corporate experimentation with drones and smart vehicles serves as a good indicator of what the future of business will look like for us all.

But away from all the excitement, there is a growing concern amongst IT teams about how existing corporate networks are expected to cope with the enormous amount of extra strain they will come under from these new connected devices. With many having only just found a way to cope with trends such as Bring Your Own Device (BYOD), will the IoT’s impact on business networks be the straw that finally breaks the proverbial camel’s back?

The answer is no, or at least it doesn’t have to be. With this in mind, I wanted to look at a couple of key areas most likely to be giving IT teams taking care of companies’ networks sleepless nights and how they can be addressed. If done effectively, not only can the current IoT storm be weathered, but businesses can begin building towards a brighter, more flexible future across their entire network.

1) Review infrastructure to get it ready for The Internet of Things

Many networks were simply not designed to cope with the demands being placed on them today by the increasing number of devices and applications. Furthermore, while balancing the needs of business-critical software and applications over an ever-growing number of connected devices is no easy task for anyone, the modern business world is an impatient place. Just a few instances of crashed websites, slow video playback or dropped calls could soon see customers looking elsewhere. They don’t care what’s causing the problems behind the scenes, all they care about is getting good service at the moment they choose to visit your website or watch your content. As a result, having the insight needed to spot issues before they occur and manage network bandwidth efficiently is an essential part of keeping any network up and running in the IoT age.

The good news is that most businesses already have the monitoring tools they need to spot tell-tale signs of the network beginning to falter; they just aren’t using them to full effect. These tools, when used well, provide a central, unified view across every aspect of networks, servers and applications, not only giving the IT team a high level of visibility, but also the ability to isolate root causes of complex issues quickly.

Efficient use of network or infrastructure monitoring tools can also allow the IT team to identify problems that only occur intermittently or at certain times by understanding key trends in network performance. This could be anything from daily spikes caused by employees all trying to log in remotely at the start of the day, to monthly or annual trends only identified by monitoring activity over longer periods of time. Knowing what these trends are and when they will occur gives the team essential insight, allowing them to plan ahead and allocate bandwidth accordingly.

2) Benchmark for wireless access and network impact

 The vast majority of IoT devices connecting to the business network will be doing so wirelessly. With wireless access always at a premium across any network, it is critical to understand how a large number of additional devices connecting this way will impact on overall network performance. By developing a benchmark of which objects and devices are currently connecting, where from, and what they are accessing, businesses can get a much better picture of how the IoT will impact on their network bandwidth over time.

Key questions to ask when establishing network benchmarks are:

  • What are the most common objects and devices connecting? Are they primarily for business or personal use?
  • What are the top consumers of wireless bandwidth in terms of objects, devices and applications?
  • How are connected objects or devices moving through the corporate wireless network, and how does this impact access point availability and performance, even security?

By benchmarking effectively, businesses can identify any design changes needed to accommodate growing bandwidth demand and implement them early, before issues arise.
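
Gathering those benchmark numbers typically comes down to SNMP polling of your wireless controllers and switches. Here is a single-OID sketch using the pysnmp library, with a placeholder address and community string; polling ifInOctets twice over a known interval gives per-interface throughput:

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Placeholder controller/switch address and SNMP community.
target = UdpTransportTarget(("192.0.2.1", 161))
oid = ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.10.1"))  # IF-MIB::ifInOctets.1

errIndication, errStatus, _, varBinds = next(
    getCmd(SnmpEngine(), CommunityData("public"), target, ContextData(), oid)
)
if errIndication or errStatus:
    print("SNMP error:", errIndication or errStatus.prettyPrint())
else:
    for name, value in varBinds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()} octets received")
```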

3) Review policies – Security and compliance

In addition to the bandwidth and wireless access issues discussed above, the proliferation of the IoT brings with it a potentially more troublesome issue for some; that of security and compliance. In heavily regulated industries such as financial, legal and healthcare, data privacy is of utmost importance, with punishments to match. And it is an ever-changing landscape. New EU data privacy laws that will affect any business that collects, processes, stores or shares personal data have recently been announced.

Indeed, businesses can face ruinous fines if found in breach of the rules relating to data protection. However, it can be extremely difficult to ensure compliance if there are any question marks over who or what has access to the network at any given point in time. Unfortunately, this is where I have to tell you there is no one-size-fits-all solution to the problem. As more and more Internet enabled devices begin to find their way onto the corporate network, businesses must sit down and formulate their own bespoke plans and policies for addressing the problem, based on their own specific business challenges. But taking the time to do this now, rather than later, will undoubtedly pay dividends in the not-too-distant future. When it comes to security and compliance, no business wants to be playing catch up.

The Internet of Things is undoubtedly an exciting phenomenon which marks yet another key landmark in the digitisation of the world as we know it. However, it also presents unique challenges to businesses and the networks they rely on. Addressing just a few of the key areas outlined above should help IT and network teams avoid potential disruption to their business (or worse) as a result of the IoT.

 

Spotify recommends the next album you should play. Twitter customizes moments and stories you should be reading. The flash sale site Gilt personalizes online shopping down to your favorite brands and discounts. The best experiences on the Web and mobile apps involve clean design, interesting details and intuitive interactions. When companies strive to offer this experience in their apps, they often turn to the LAMP stack for enablement.

The LAMP Stack

The acronym “LAMP” was named for its four original open-source components: Linux, Apache, MySQL and PHP. Over the years the LAMP stack has evolved to include alternatives while retaining its open-source roots. For example, the “P” can now also mean the Perl or Python programming languages.

The open-source nature of each component in the LAMP stack has three distinct advantages for IT pros:

  • Each tool is free to use, saving money
  • Licenses have non-restricted parameters, expanding usage of each tool
  • Nothing is dependent on vendors to fix bugs, allowing you to address any issues personally

So what does that mean to infrastructure managers? Well, that great experience depends on the underlying infrastructure being up and running. So let’s break that down a little, starting at the base of the LAMP stack.

Linux

Linux provides the OS layer of the stack. Here we need to monitor for memory bottlenecks, CPU load, storage or network issues that can affect the core performance of the entire stack.

Apache

Apache is one of the most popular web servers and serves the static content that forms the basis of your app. It is good practice to start monitoring at this layer for fundamental issues that can tank apps. You’ll want to watch things like the number of incoming requests, web server load and CPU usage.
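
If you want a feel for the raw numbers a monitor collects at this layer, Apache’s mod_status module (when enabled) publishes them over HTTP. A quick sketch, assuming the conventional /server-status?auto endpoint on your own server:

```python
from urllib.request import urlopen

# Assumes mod_status is enabled; the URL is a placeholder for your server.
STATUS_URL = "http://localhost/server-status?auto"

stats = {}
with urlopen(STATUS_URL, timeout=5) as resp:
    for line in resp.read().decode().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            stats[key.strip()] = value.strip()

print("Requests/sec:", stats.get("ReqPerSec"))
print("Busy workers:", stats.get("BusyWorkers"))
print("CPU load:", stats.get("CPULoad"))
```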

MySQL

Today’s web apps track habits, remember login data and predict user behavior. This streamlines web browsing and communication. But to keep all this running as it should, the stack requires a database. The LAMP stack usually relies on a MySQL database server to store this information, but again, substitutions are often made, including PostgreSQL and even MongoDB.

PHP

PHP is a server-side scripting language designed specifically for the web, though it is also used as a general-purpose programming language. It enables developers to add dynamic interactions with app and web users. As noted, Perl and Python can also be used as alternatives to PHP.

WhatsUp Gold Visibility Expands to Include Linux and Apache

Over the years, we’ve extended WhatsUp Gold beyond its origins in the network to include the physical and virtual servers, and core and custom apps your organization relies on to run the business. In WhatsUp Gold version 16.4 announced last week, that visibility expands to include Linux and Apache to give you the end-to-end infrastructure perspective you need to provide quick identification and remediation of developing issues.

The LAMP stack is important to businesses implementing applications that will improve the customer experience and drive profits. The business case for using LAMP relies heavily on lack of restrictions and ease of implementation. This is especially important for small and mid-sized IT teams where the focus has to be on top level improvements as opposed to base level functionality.

This is also a key reason why so many small IT teams depend on Ipswitch and our WhatsUp Gold product. It’s powerful, comprehensive, easy to use and customize to your needs, and it leads the industry in low cost of ownership. And it is precisely our respect for these small IT teams that drives us to develop things like support for Linux and Apache, integrated into your unified infrastructure view for free.

Oh, and by the way, we also added support for Java Management Extensions (JMX) so you can monitor Java apps as well, but that’s a story for another day.

Related article:

What’s New in WhatsUp Gold 16.4

 

 

The International Organization for Standardization (ISO) is a non-governmental entity made up of 162 national standards bodies. By creating sets of standards across different markets, it promotes quality, operational efficiency and customer satisfaction.

Businesses seek ISO certification to signal their commitment to excellence. As a midsized IT service team implementing ISO standards, you can reshape quality management, operations and even company culture.

Choosing the Right Certification

The first step is to decide which sets of standards apply to your area of specialization. Most sysadmins focus on three sets of standards: 20000, 22301 and 27001.

  • ISO 20000 helps organizations develop service-management standards. It standardizes how the helpdesk provides technical support to customers as well as how it assesses its service delivery.
  • ISO 22301 consists of business continuity standards designed to address how you’d handle significant external disruptions, like natural disasters or acts of terrorism. These standards are especially relevant for hospital databases, emergency services, transportation and financial institutions — anywhere big service interruptions could spell a catastrophe.
  • ISO 27001 standardizes infosec management within the organization both to reduce the likelihood of costly data breaches and to protect customers and intellectual property. In support of ISO 27001, ISO 27005 offers concrete guidelines for security risk management.

Decisions, Decisions

Deciding which ISO compliance challenge to tackle first depends on a few different things. If your helpdesk is already working within a framework like ITIL — with a customer-oriented, documented menu of services — ISO 20000 certification will be an easy win that can motivate the team to then tackle a bigger challenge, like security. If you’re particularly concerned about security and want to start there, try combining ISO 22301 and ISO 27001 under a risk-management umbrella. Set up a single risk assessment/risk treatment framework to address both standards at once.

Getting Started

ISO compliance is not about checking off boxes indicating you’ve reached a minimum standard. It’s about developing effective processes to improve performance. With ISO 22301 and 27001, you’ll document existing risks, evaluate them and decide whether to accept or reduce them. With ISO 20000, you’ll document current service offerings and helpdesk procedures like ticket management and identify ways to reduce time to resolution.

Prioritizing

ISO compliance looks a little different to every organization, and IT finds its own balance between risk prevention and acceptance. For instance, if a given risk is low and fixing it would be inexpensive, accept the risk, document it and don’t throw money at preventing it. Whichever standard you start with, though, keep a few principles in mind:

  • Focus on your most critical business processes. Identify what your organization can least afford to lose — financial transactions processing, for example. On subsequent assessments, you can dig deeper into less crucial operations.
  • Identify which vulnerabilities endanger those processes. Without an effective ticketing hierarchy at the helpdesk, a sysadmin could wind up troubleshooting an employee’s flickering monitor while an entire building loses network connectivity.
  • Avoid assessing every process or asset at first. Instead of looking at all in-house IP addresses for ISO 27001, focus on the equipment supporting your most important functions. Again, you can dig deeper after standardizing the way you manage information.
  • Don’t chase irrelevant items. Lars Neupart, founder and CEO of Neupart Information Security Management, finds that ISO 27005 threat catalogs look like someone copied them from a whiteboard without bothering to organize them. Therefore, don’t assume every listed item applies to every situation. As Neupart puts it: “Not everything burns.”
  • Put findings in terms that management can understand. When you’re asking management to pay for implementing new helpdesk services or security solutions, keep your business assessments non-technical. Put information in numerical terms, such as estimating the hourly cost of downtime or the percentage decline in quarterly revenue after a data breach (a quick example follows this list).
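
For that last point, a back-of-envelope model is often all management needs. A toy example with invented figures:

```python
# All inputs are illustrative; substitute your own business numbers.
revenue_per_hour = 12_000   # sales lost while systems are down
idle_staff = 40             # employees unable to work
loaded_hourly_rate = 55     # salary plus overhead per employee per hour
recovery_cost = 3_500       # overtime, consultants, expedited hardware

def downtime_cost(hours):
    """Estimated total cost of an outage of the given length in hours."""
    hourly_burn = revenue_per_hour + idle_staff * loaded_hourly_rate
    return hours * hourly_burn + recovery_cost

print(f"4-hour outage: ${downtime_cost(4):,.0f}")  # -> $60,300
```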

So, How Much Is This Going to Cost?

Bonnie del Conte is president of CONNSTEP, Inc., a Connecticut-based company that assists companies in implementing ISO across a range of industries. She says the biggest expenses related to ISO certification are payroll, the creation of assessment documentation and systems (e.g., documentation for periodic assessments, including both paper and software) and new-employee training programs. Firms like hers stipulate consulting fees in addition to the actual certification audit. At the same time, hiring a consultant can reduce the time intervals for standards implementation and audit completion — and prevent mistakes.

Why It’s Worth It

The ultimate goal of ISO certification is to generate measurable value and improvement within IT. It’s about how proactive, progressive awareness and problem-solving prevents disasters, improves service and makes operations more efficient. Its greatest intangible benefit, says del Conte, is often a better relationship between IT and management. “Companies find improved communication from management,” del Conte says, “due to more transparency about expectations and the role that everyone has in satisfying customer expectations.”

Don’t try to become the perfect IT service team or address every security vulnerability the first time around. Hit the most important points and then progressively look deeper with every assessment cycle. As your operations improve, so will IT’s culture and its relationship with the business side. If ISO certification helps you prove that IT is way more than a cost center, it’s worth the investment.

A decade ago, organizations expected a disconnect between IT and other relevant business units. After all, support was little more than a cost center, necessary to keep enterprises up and running but outside line-of-business (LoB) revenue streams. Movement in cloud computing, big data and, more recently, the burgeoning Internet of Things (IoT) have caused this trend to do a one-eighty.

IT is now a critical part of any boardroom discussion, with total network visibility playing a lead role in a company’s pursuit of a healthier bottom line. According to Frost & Sullivan, in fact, the network monitoring market should reach $4.4 billion in just two years, double the market revenue in 2012. Of course, talking about the benefits of a “single pane of glass” is one thing; IT pros need actionable scenarios to drive better budgets and improve productivity. Here’s a look at the top 10 use cases for total network transparency.

1) Security

As noted by FedTech Magazine, the enemy of IT security is a lack of visibility. If you can’t view your network end-to-end, hackers or malware can slip through undetected. Once inside, this presence is given free rein until it brushes up against continuously monitored systems such as payment portals or HR databases. Complete visibility lets admins see security threats the moment they appear, and respond without delay.

2) Automation

Single-pane-of-glass visibility also lets IT pros automate specific tasks to improve overall performance. Consider eDiscovery or big data processing; while you can configure and perform these tasks manually, the IT desk’s time is often better spent forwarding strategic business objectives. Total network visibility allows you to easily determine which processes are a good fit for automation and which are best left in human hands.

3) Identification

According to a recent Clearswift report, 74 percent of all data breaches start from inside your organization. In some cases employees are simply misusing cloud services or access points, whereas in others, the objective is to defraud or defame. Either way, you need to know who’s doing what in your network, and why. Visibility into all systems — and who’s logging on — helps combat the risk of insider threats.

4) Prediction

You can’t always be in the office. What happens when you’re on the road or at home but the network still requires oversight? Many monitoring solutions now include mobile support, allowing you to log in from a smartphone or tablet to check on current conditions. This is especially useful if you’re out of town but receive warning about severe weather moving in. Total visibility gives you the lead time needed to prep servers and backup solutions to handle the storm.

5) Analytics

Effective data analysis can make or break your bottom line. As noted by RCR Wireless, real-time network visibility is crucial here. The goal is interoperability across systems and platforms to ensure data collection and processing happens quickly enough to provide actionable insight into the key needs of your network.

6) Budget

With a seat at the boardroom table, CISOs and CIOs must now justify IT budget requests as part of their business strategy at large. Using a single pane of glass lets you showcase exactly where investments are paying off — analysis tools or intrusion-detection solutions, for instance — and request commensurate funding to improve IT performance.

7) Proactive Response

It’s better to get ahead than fall behind, obviously, but think of it this way: Network visibility lets you see infrastructure problems in their infancy rather than only after they affect performance. Proactive data about app conflicts or bandwidth issues gives you the upper hand before congestion turns into a backlog of issue tickets.

8) Metrics

Chances are you’ll be called to the boardroom this year to demonstrate how your team is meeting business objectives. Complete visibility lets you collect and compile key metrics that clearly show things like improved uptime, amount of data backed up or new devices added to the network.

9) Training

According to Infosecurity Magazine, 72 percent of IT professionals believe their company isn’t doing enough to educate employees about IT security. With insider threats at an all-time high, network visibility is critical to pinpoint key vulnerabilities and design effective training plans for employees to reduce the chances of a data breach.

10) End-User Improvement

Technology doesn’t always work as intended. And in many cases, employees simply live with poor function — they grumble but don’t report network slowdown or crashing apps. Without this data, you can’t improve the system at large. With total network insight, however, you can discover end-user pain points and take corrective steps.

Seeing is believing. More importantly, seeing everything on your network is actionable, insightful and bolsters the bottom line.

It’s been a year since Sony Pictures employees logged into their workstations expecting to start a normal workday and were instead greeted by soundbites of gunfire, images of skeletons and threats scrolling across their monitors. To date, the Sony Pictures attack is arguably the most vivid example of advanced persistent threats used to disable a commercial victim. A corporate giant was reduced to posting paper memos, sending faxes and paying over 7,000 employees with paper checks.

How Advanced Persistent Threats Work

Writing for the Wall Street Journal, security expert Bruce Schneier defines advanced persistent threats (APTs) as the most focus- and skill-oriented attacks on the Web. They target high-level individuals within an organization, or attack other companies that have access to their target.

After stealing login credentials, cybercriminals escalate to admin privileges, move data and employ sophisticated methods to evade detection. APTs can persist undetected in networks for months, even years.

What They Do

Most APTs are deployed by government agencies, organized factions of cybercrime or activist groups (often called “hacktivist” groups). According to Verizon’s most recent Data Breach Investigations Report, APTs primarily target three types of organizations: public agencies, technology/information companies and financial institutions.

Some APTs are designed to steal specific information, like a company’s intellectual property. Other APTs, such as the Stuxnet worm, are used to spy on or even attack another government. APTs like those launched by Sony’s attackers seek to embarrass one organization for a particular grievance. Hackers reportedly had a beef with Sony back in 2005, when the company embedded anti-piracy software in its CDs.

Peter Elkind, writing for Fortune, reported that attackers using advanced persistent threats managed to disable Sony Pictures by:

  • Erasing the storage data on 3,262 of 6,797 personal computers and nearly half of its network servers.
  • Writing over these computers’ data in seven different ways and deleting each machine’s startup software.
  • Releasing five Sony Pictures films, including four unreleased movies, to torrent sites for downloading.
  • Dumping 47,000 Social Security numbers, employee salary lists and a series of racist internal emails directed at President Obama.

Limiting Damage from APTs

Maintaining patches and upgrades, using an antivirus and enabling network perimeter detection are worthy defense strategies, but they rarely work against an intruder who’s in possession of high-level login credentials. With sufficient skills, resources and time, attackers can penetrate even the most well-fortified network. Organizations should start by using least-privilege security protocols and training critical employees to recognize and avoid spearphishing attacks.

While you’re at it, use network monitoring to detect APTs early, and watch for the telltale signs of an attack in progress. Some of these are as follows:

Late-Night Login Attempts

A high volume of login attempts occurring when no one’s at work is a simple but critical APT indicator. They may appear to come from legitimate employees, but they’re actually attackers — often in another timezone, according to InfoWorld — using hijacked credentials to access sensitive information at odd hours.
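
A first-pass monitoring rule for this can be as simple as flagging authentications outside business hours. A toy sketch with invented login data:

```python
from datetime import datetime

WORK_HOURS = range(7, 20)   # 07:00-19:59 local time; tune to your business

# Hypothetical (timestamp, user) pairs parsed from authentication logs.
logins = [
    (datetime(2016, 1, 12, 3, 14), "svc-backup"),
    (datetime(2016, 1, 12, 9, 5), "asmith"),
]

for when, user in logins:
    if when.hour not in WORK_HOURS:
        print(f"off-hours login: {user} at {when:%Y-%m-%d %H:%M}")
```

Real detection would also weigh source address and frequency, but even this crude filter surfaces the 3 a.m. pattern described above.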

Backdoor Trojans

By dropping backdoor Trojan horse malware on multiple endpoint computers, attackers maintain access to the system even when they lose access in another area. Security personnel should never stop after finding a backdoor Trojan on one computer; there may be more still on the network.

Shadow Infrastructure

Attackers frequently set up an alternate infrastructure within the existing network to communicate with external command-and-control (C&C) servers. Rogue agents have even been known to set up a series of spoof domains and subdomains based on old company names to appear legitimate. When people visit the real domain, the attackers’ C&C server redirects them to fake URLs.

Outbound Data Abnormalities

InfoWorld also suggests looking for strange movements of outbound data, including those against computers within the company’s own network. Attackers love to build internal “way stations,” assemble gigabytes of data and compress the files before extracting them.
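
One simple way to operationalize that advice is a per-host outbound baseline with an outlier test. A sketch using made-up flow totals:

```python
from statistics import mean, stdev

# Hypothetical outbound bytes per host per day, from flow records;
# the final sample for 10.0.0.9 is deliberately anomalous.
history = {
    "10.0.0.5": [120e6, 98e6, 110e6, 105e6],
    "10.0.0.9": [80e6, 75e6, 82e6, 2.1e9],
}

for host, samples in history.items():
    baseline, latest = samples[:-1], samples[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma and (latest - mu) / sigma > 3:   # more than 3 standard deviations
        print(f"{host}: {latest / 1e9:.1f} GB out vs ~{mu / 1e6:.0f} MB baseline")
```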

Threat intelligence consultants are always at your disposal, but they shouldn’t be the ones who wait for 15 minutes — surrounded by logged-in workstations — before a single human comes to greet them. To be prepared for a major attack, today’s IT departments should fortify security and network monitoring tools to detect APTs, and tell any contractors they work with to do the same.

Scripting is a popular and powerful choice to automate repeatable file transfer tasks. It can be horrifying, though, to discover just how many scripts your infrastructure relies on for successful operation. This, and the time they take to execute, is sure to raise the ire of managers and executives. Here are some alternative file-based automation methods to reduce time and errors in common tasks that can benefit from file encryption, deployment redirection, scheduling and more.

DevOps and Automation

It used to be that deployment meant occasionally copying files to a set of physical servers that sat in a datacenter, maybe even on premises. Often this was done via FTP and terminal sessions directly with server names or addresses. If you were smart, you created scripts with the files, their locations and servers hardcoded into them.

Automated scripts are an excellent first step, but they have limitations. When a new server is added, an old one is upgraded or replaced, or virtualization demands new names and addresses, the result is script failure. Changing OS platforms also means a single set of scripts won’t work across all of your servers. Scripts can be error-prone, too, and slow if they’re not compiled ahead of time.

With the emergence of agile and DevOps practices, there’s no time to manage these ever-changing environments, so the simplest route is to not do it at all. But because you still need to deploy software somewhere, API-based systems help you achieve the automation required without hardcoding the details. The end product is a much more efficient file transfer process.

SLAs: Performance and Security

File-based automation reduces the time you spend scripting tasks such as encrypting files or redirecting them to the right servers on arrival. The consistency and predictability of this process ensures you meet your service-level agreements (SLAs), which demand repeatability and the removal of errors. But in order to achieve the performance metrics you need when working in an agile and nimble organization, you need more than that.

An enterprise-grade managed file transfer solution enables you to transfer files reliably, securely and quickly. Look for a solution that offers an event-driven workflow wherein processes are kicked off either according to a schedule or on-demand based on a trigger. Additionally, file transfer workloads need to happen in parallel, simultaneously deploying across your environments to limit the time it takes to deploy changes.
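
To see the gap an enterprise tool fills, compare it with the bare-bones script it replaces. Below is a minimal polling sketch using the paramiko SSH library (host, account and paths are placeholders); everything the paragraph above asks for, including event triggers, parallel transfers, retries and audit logging, is deliberately absent:

```python
from pathlib import Path
import paramiko

OUTBOX = Path("/data/outbox")   # watched drop folder (placeholder)
seen = set()

def upload(path):
    """Push one file to the remote server over SFTP."""
    ssh = paramiko.SSHClient()
    ssh.load_system_host_keys()
    ssh.set_missing_host_key_policy(paramiko.RejectPolicy())  # known hosts only
    ssh.connect("transfer.example.com", username="mftbot")    # key-based auth
    try:
        ssh.open_sftp().put(str(path), f"/inbound/{path.name}")
    finally:
        ssh.close()

# One polling pass; rerun on a schedule or behind a file-watch trigger.
for f in OUTBOX.glob("*"):
    if f.is_file() and f not in seen:
        upload(f)
        seen.add(f)
```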

For peace of mind (yours, specifically), your file transfer solution needs to be hardened. Make sure it uses encryption for all file transfers and integrates with your enterprise identity management to control access to all of your environments. Ultimately, this helps you to conform to the requirements of the most regulated markets (health care and financial, for instance) — as well as local legislation. Your security control should be automated as well, through the use of policy enforcement with secure solutions for authentication (think RADIUS or LDAP).

Finally, you need to know the status of all transfer operations at a glance. With DevOps, constant process monitoring and measuring will lead to further improvements and the removal of bottlenecks. Ensure you have the proper level of reporting and visualization into your file transfers, including those completed, those that may have failed and those that are ongoing.

Moving Files To and From Anywhere

You may need to encrypt and move a file that was just extracted from a business system and is now sitting on a shared drive inside the trusted network. Maybe you need to move an encrypted file sitting on an FTP server in your business partner’s data center. You need the flexibility to encrypt, rename, process and transfer files from any server and deliver them wherever you need them.

Don’t Forget the Cloud

Whether you’re still working on-premises or you’ve already moved many of your systems to the cloud, your file transfer processes should work across both. The reality is that most organizations will continue to keep data living on both, likely settling on a hybrid on-premises and public-cloud mix for security and control purposes. Just as the cloud promises to transparently move user workloads across servers in both environments, your file transfer and deployment solution should do the same. In the end, good management will treat you like the hero you are.