Ipswitch Blog

Where IT Pros Go to Grow

Our Latest Posts

How the U.S. Armed Forces Monitor Tactical Deployments
U.S. armed forces take mobile network monitoring very seriously

When you think about where IT pros work, you most likely envision a typical climate-controlled office setting. In reality, IT happens everywhere — even out in the field, within the high-pressure tactical deployments created by the U.S. armed forces.

Unique Requirements for U.S. Armed Forces Network Operations

These mobile military networks come in all shapes and sizes, and they often support radar, command-and-control, and satellite communication systems. The deployments can be more confined than you might think: these field networks don’t live in a server closet. The mobile infrastructures that require network monitoring are deployed inside a tent, on an aircraft carrier or a destroyer, and even in moving tactical vehicles. These are serious network operations that must be deployed quickly and stay mobile to meet the needs of our U.S. armed forces.

Government agencies and their contractors in the field have very specific needs for network monitoring tools. They work in isolated locations where the network simply cannot go down, which makes security, reliability and ease of operation a must. Their tools need to be deployed without hassle, and young soldiers with only a few months of IT training must be able to jump in and use the software effectively from day one. The tools also need to deliver exactly what is needed without breaking the budget. In short, network monitoring for tactical deployments must be accessible, reliable and flexible.

Why U.S. Armed Forces and Government Agencies Choose WhatsUp Gold

All of these unique requirements of the U.S. armed forces and government agencies help explain why Ipswitch WhatsUp Gold infrastructure monitoring tools are so often used within tactical deployments as well as traditional network environments. WhatsUp Gold has a small footprint with low memory consumption, reasonable system requirements and an agentless architecture. In terms of budget, we can support a network with as few as 25 devices.

“The ceaseless damage control provided by WhatsUp Gold never failed in monitoring the only working civilian network of an entire nation. We couldn’t have done our job in Iraq without WhatsUp Gold doing its job.”

– U.S. Defense Contractor

WhatsUp Gold can also be deployed in a matter of minutes, ensuring full and quick coverage. This fast setup and user-friendly interface are enabled by a powerful discovery engine that applies monitors and policies to the appropriate switches and devices.

Even mobile networks that move from location to location use WhatsUp Gold because of its simplicity and dependability. For example, Cisco’s Emergency Response Team uses WhatsUp Gold to monitor local, state and federal government networks across the U.S. and Canada that have experienced an outage due to a natural disaster. With quick setup, Cisco can get an essential government network back online quickly and easily.

Concept Proven

I spoke recently with a network manager who was evaluating WhatsUp Gold for his new employer. Why did he pick us? Because our products helped support his tactical networks during three tours in Afghanistan. That’s proof of concept for you.


AHPRA Data Breach Is a Healthcare Community Wake-Up Call
Protecting personally identifiable information (PII) is everyone’s business.

I’ve been reading today’s news from Australia regarding allegations of a data breach at the Australian Health Practitioner Regulation Agency (AHPRA). Guardian Australia reported that an AHPRA employee assaulted a nurse over a personal grudge after using his credentials to access her home address and phone number last September. AHPRA functions as a watchdog group and investigates complaints against Australian healthcare practitioners.

Additionally, in 2014 another AHPRA employee used her credentials to access medical records regarding a complaint made against her as a midwife, and used the information in a court proceeding.

AHPRA Data Breach: “Classic Case of Systemic Regulatory Failure”

John Madigan, an independent senator in Australia, told the newspaper, “While AHPRA is a classic case of systemic regulatory failure, unfortunately it is not unique. In recent times there has been an explosion in regulatory agencies of this type.”

These very unfortunate breaches reveal the ongoing data security issues that can affect the healthcare industry in any country. In this case it’s more serious than stolen information: the breach led to a physical attack on a healthcare employee. And there were signs that these kinds of breaches were possible, as AHPRA had noted in its annual reports that resources were not sufficient to maintain proper controls over patient data.

How Australia’s Largest Health Insurance Company Protects Data

The major gaps in data security practices in parts of the Australian healthcare system serve as a cautionary tale to any organization, in any industry, in any country. In my opinion, if you run any kind of organization – whether non-profit, government or corporate – you should be held accountable for any mishandling of sensitive personal information that leads to a data breach. It seems to me that the Australian government needs to enforce tighter regulatory compliance mandates that comprehensively cover their healthcare system, including watchdog groups like AHPRA.

Today’s news made me think of our customer Medibank, Australia’s largest provider of integrated health insurance and health solutions. Each day, Medibank employees must transfer up to 15GB of confidential healthcare files, a volume that is expanding by around 3GB per month. These files include patient policy records that must be transferred securely between Medibank’s sites and 15 external business partners.

Medibank needed to meet Australian government and Commonwealth regulations and policies, including the National Safety and Quality Health Service Standards and the Privacy Act 1988 as outlined by the Office of the Australian Information Commissioner (OAIC). The organization sought out a managed file transfer system to provide a better, more secure and regulated way to send files within the organization and beyond. They knew they needed tight, built-in security controls, including identity and access management, data loss prevention and encryption, to avoid a data breach. Together, these would allow their IT team to manage, view, secure and control all file transfer activity through a single system.

Jason Atkinson, IT Claims & Product Team Lead for Medibank, shared with us: “As a health organization handling large volumes of sensitive data, security and compliance were probably the biggest drivers behind this project. It was important to us that any solution not only had good security controls in operation but also excellent auditing capabilities.”

Medibank turned to our Australian partner DNA Connect to address their needs. The healthcare organization ultimately chose Ipswitch MOVEit managed file transfer software to radically decrease the time required to set up secure file transfers.

MOVEit passed the Medibank security team’s demanding requirements for end-to-end encryption and auditability with flying colors. After quickly deploying MOVEit, Medibank staff and business partners were able to gain full visibility, auditing and compliance with Australian laws and regulations.

I don’t see why any agency or healthcare organization couldn’t do the same thing as Medibank. Our product is not high-priced software from Big IT, and it’s simple to deploy and use. Medibank’s innovative work to protect patient data is a model the entire Australian healthcare community can follow to better protect personal information.

Be Prepared for Enough Network Bandwidth

Predicting the future isn’t a perfect practice without a DeLorean. For support desks at SMBs, however, determining network bandwidth needs down the line is critical to sustained growth. The farmer’s almanac may not cover network environments, but with a few helpful tips, you’ll gain much-needed perspective on how to plan for current and eventual system requirements.

From the Inside

Lars Brennan has spent the last two decades in IT functions ranging from network admin to director of operations, at a variety of SMBs. Currently serving as an IT consultant in northern Colorado, Brennan stands out because he has repeatedly found himself knee-deep in bandwidth limits as company growth produced data that outpaced the capabilities of the original network. More importantly, he found considerable success by confronting these crises early in his technical tenure.

“All too often I focused on the problem, rather than the circumstances surrounding it,” Lars said quickly when asked about these midsized network headaches. “When I began to explore the current network environment and the people behind its traffic, I learned some valuable things.” Lars hits the nail on the head here: Network bandwidth serves to foster communication between multiple business applications, most of which are driven by humans in front of keyboards. Understand how the individuals sitting next to you interact with the network, and you’ll find yourself with the intuition to not only address present concerns, but plan for future requirements based on which data points receive the most activity and why.

Human After All

The real beauty of approaching bandwidth planning from a behavioral perspective, according to Brennan, is that you begin to see the nuances of each source of traffic. “When I would try to tackle bandwidth issues from a programmatic stance, I would simply go to my network monitor and compile an application list of top bandwidth abusers,” he says. “I’d then pinpoint non-core apps that could be throttled or better prioritized.” Lars goes on to describe that although this approach worked well enough in the beginning, he began to notice different users engaging with the same application in very different ways as the company grew.

“We would have a small group of developers using the exact same IDE, but for very different purposes. GUI guys would be using an insignificant amount [of bandwidth], whereas the ones testing data features would eat up more than we had allotted.” How come? Because not every department is an equal contributor to the data usage you’re measuring, even if 100 people are touching the same application.

This presents a unique opportunity to delegate network resources for a current environment’s future growth based on in-depth knowledge of the people who use your hardware and software. As such, one of the best ways to begin planning the often-turbulent requirements of SMB network bandwidth is to examine the tasks behind the applications that wind up spiking on your network monitor. Take a look at who’s using them, what they’re using them for and how future projects may change the answers to those first two questions.
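If your network monitor can export flow records with user attribution, this kind of per-user, per-app rollup is easy to script. Here’s a minimal Python sketch of the idea; the records, names and byte counts are all hypothetical:

    from collections import defaultdict

    # Hypothetical flow records exported by a network monitor:
    # (user, application, bytes transferred)
    flows = [
        ("alice", "IDE", 120_000_000),
        ("bob",   "IDE",   4_000_000),
        ("carol", "CRM",  80_000_000),
        ("alice", "CRM",  10_000_000),
    ]

    usage = defaultdict(int)
    for user, app, nbytes in flows:
        usage[(user, app)] += nbytes

    # Rank (user, application) pairs by traffic so you can see who drives
    # an application's bandwidth spike, not just which app spiked.
    for (user, app), nbytes in sorted(usage.items(), key=lambda kv: -kv[1]):
        print(f"{user:8s} {app:6s} {nbytes / 1e6:8.1f} MB")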

A Cloudy Concept

Understanding the way in which a particular user consumes bandwidth — with respect to the software and hardware that goes with it — allows you to draw a detailed roadmap of your overall network requirements. It’s not the only view, though. Cloud computing can throw a major wrench into even the best network plans because you can’t always identify the source of activity in a drive that’s accessible off-site or on a separate device. In these instances, stick to what you know: Use your knowledge of user behavior to set up appropriate VLANs, which TechTarget suggests can better organize and isolate the most logical cloud segments.

In the end, a user-centric approach to network planning serves two important purposes: First, it gives you the environmental insight to build a detailed network bandwidth management plan. And second, it allows you to marry this knowledge with new potential projects and assets so you can better predict what your future network requirements will look like. In tandem with conventional capacity planning wisdom, you’ll be well equipped to provide a robust network that handles modern SMB challenges, however many users may be involved.

5 tips for small IT teams

IT spending is on track to hit $3.9 trillion worldwide by 2019, according to Gartner. For support teams, however, more technology spending doesn’t always translate to more full-time employees, and outsourcing doesn’t necessarily balance your workload. If you work on a small IT team, no doubt you’re feeling the crunch of too much work and not enough manpower. Here are five tips to help growing IT teams make ends meet.

1. Job Number One

According to Sébastien Baehni, VP of Engineering at end-user analytics company Nexthink, one of the biggest challenges facing smaller helpdesks is prioritizing tasks to ensure other employees “can work and do their day job.” The problem is that it’s easy to get lost in “urgent” requests from executives or in ongoing technical issues that leave other “very important” tasks on the back burner.

In addition to affecting production and throughput, leaving these issues unresolved opens the door to security vulnerabilities. If possible, take a deep breath and ask for user feedback. This lets you tackle the “low-hanging fruit”: issues that may generate thousands of reports and complaints yet never reach the top of your urgent list. They’re often easy to eliminate, and fixing them clears space for more critical line-of-business (LOB) tickets. (A rough way to rank them is sketched below.)
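One crude but useful heuristic, shown here in Python with made-up ticket data, is to rank open issues by how many complaints an hour of work would clear:

    # Hypothetical ticket summaries: (issue, report count, estimated hours to fix)
    tickets = [
        ("Password reset requests",        1200,  2.0),
        ("VPN drops in branch office",      300,  8.0),
        ("Executive's printer jams",          4,  0.5),
        ("Patch backlog on file servers",    40, 16.0),
    ]

    # Sort by complaints cleared per hour of effort: the low-hanging fruit
    # floats to the top even though no single ticket looks "urgent."
    for issue, reports, hours in sorted(tickets, key=lambda t: -(t[1] / t[2])):
        print(f"{issue:32s} {reports / hours:7.1f} reports/hour")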

2. Mind the Gap

There are more than 210,000 unfilled cybersecurity jobs in the United States, as noted by Infosecurity Magazine, and upwards of a million worldwide. So even if the approved budget includes a new hire (or two), you may still be unable to find the right candidate to fill the position. And while it may seem counterintuitive given your existing workload, the simplest way to address security concerns and get your department back on track is sending at least a few staff members for up-to-date security training. The result? You fix security holes rather than patching them with duct tape and hoping for the best.

3. All for One

Baehni also offers advice for IT teams looking for the most effective way to handle diverse task lists with limited staff. In his experience, an “all-for-one” approach, wherein teams work together to solve emergent issues and employees actively identify problems and solutions on their own, produces better results than silos and compartmentalization. It makes sense: What happens if your ideal network expert gets sick or moves to another company? By diversifying talent and hiring people with the “agility, curiosity and intellectual honesty required to identify issues,” it is possible to build a team of self-improving experts who collectively handle critical support tasks.

4. Overtime Opt-Out

Overtime is often a bone of contention for sysadmins. According to Fortune, for example, tech giant Amazon has come under fire for expecting big overtime commitments from employees, in some cases giving the eCommerce retailer an air of “inhuman meritocracy.” Beyond the loss of focus and potential burnout associated with mandatory overtime, however, there’s the larger problem of “making things work.” If support teams constantly take on overtime just to complete basic tasks, executives get the sense that the understaffed model is a success, since nothing is actively falling apart. Sometimes, letting a little pressure show is a good thing.

5. Embrace the Shadows

What happens when IT can’t keep up? Shadow IT emerges. In fact, according to Windows IT Pro, a survey of CIOs revealed that companies are spending between 5 and 15 percent of their budget managing shadow IT — money that could be better spent taking the pressure off you. The simplest route from shadow to light? Embrace popular tools and processes where possible, rather than fighting the battle on principle. You’ll find happier users and fewer security holes to patch.

Want to take the pressure off your team? Find ways to target what matters, work smarter not harder and leverage the right tools for the job.

Pen Testing

The value of pen testing (aka penetration testing) is an ongoing debate, and it’s also the subject of a great deal of misunderstanding. Talk about it with fellow sysadmins over lunch and you’re likely to hear a few different opinions on what it is and why you should or shouldn’t get involved. So, which is it: Do you need to be doing it, and if so, how often?

What It Is and What It Isn’t

A buddy over at your cloud supplier just told you penetration testing is the same as a vulnerability scan, whereas the helpdesk rep next to you says it’s a compliance audit. Your boss calls it a security assessment. They’re all wrong, and yet just a little bit right: Properly conducted pen testing tells you the real-world effectiveness of your existing security controls when they face an active attack by a real cybercriminal. The test doesn’t just find vulnerabilities; it tells you how big the holes are.

What Will Pen Testing Tell Me?

Properly performed, pen testing will at least:

  • Determine the feasibility of certain attack vectors
  • Assess the magnitude of the operational impact of successful attacks
  • Provide evidence that your department needs a bigger budget
  • Test the department’s ability to detect and defend against agile attackers
  • Identify vulnerabilities that a simple vulnerability scan or security assessment will miss
  • Help you meet industry compliance specifications such as PCI DSS and HIPAA

Is It Worth It?

Even a basic, automated IP-based test isn’t cheap, and the services and software that perform in-depth testing can be pretty expensive. When deciding how to go about this testing, you need to decide how important your company’s data and IP are, and what they’re worth. The average cost of a data breach is estimated at more than $3 million. The Target data breach in 2013? Earlier this year, the big-box retailer declared costs of $162 million across 2013-2014, not including lost business and potential expenses incurred from class-action lawsuits.

How Often Should Pen Testing Happen?

Those handling sensitive credit-card data are (or should be) well-versed in the Payment Card Industry Data Security Standard (PCI DSS). This standard actually requires that you perform pen testing annually, as well as after any system changes. Add to this list when end-user policies are changed, when a new office goes online and when security patches are installed — and you’ve got a solid idea of when a pen test should take place.

In-House or Farmed Out?

Although you may break out the toolbox when your car needs a belt or hose change, you shouldn’t be handling micrometers and a cylinder hone when the engine block needs decking. Take it to a professional so it’s done right. Pen testing follows the same principle. For instance, services like Acme Pen Testing abound online, charging as little as $50 for a report on your desk within a few days. But how reliable is that report? Not very, especially when you’re stuck telling the C-suite that a quick review overlooked a vulnerability that lost company data. If you’re going to pen test in-house, you need people who are specifically trained in pen testing.

Evan Saez, a cyber-threat analyst for LIFARS, recommends using automated tools for in-depth penetration testing. Why? These are the same types of tools that attackers use. Saez recommends Metasploit for a number of reasons, but the main upside is its huge base of contributors who are constantly improving it. At the end of the day, the safest pen test has today’s standards in mind. Just make sure your cloud-based data is held to the same ones.
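Metasploit is the professional-grade option; purely to illustrate the simplest building block that automated tools layer on top of, here is a toy TCP connect scan in Python. The host and port range are placeholders, and you should only ever scan systems you’re authorized to test:

    import socket

    def tcp_connect_scan(host, ports, timeout=0.5):
        """Return the ports on host that accept a TCP connection."""
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                    open_ports.append(port)
        return open_ports

    # Scan the first 1024 ports on localhost (an authorized target).
    print(tcp_connect_scan("127.0.0.1", range(1, 1025)))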

6 Pain Points You Can Avoid With Unified Infrastructure Monitoring

“The story of the blind men and an elephant originated in the Indian subcontinent from where it has widely diffused. It is a story of a group of blind men (or men in the dark) who touch an elephant to learn what it is like. Each one feels a different part, but only one part, such as the side or the tusk. They then compare notes and learn that they are in complete disagreement.” (Source: Wikipedia)

This parable rings true beyond the animal kingdom. Take IT, for example: when unified monitoring tools are not part of the mix, sysadmins can’t see the full picture of their networks, systems and applications.

The advantages of a unified tool with full visibility can easily make a full switch worthwhile. TechTarget presents a typical use case: a wireless access point seems to be acting up, but the problem is actually in the wired subnet to which it’s connected. A technician could lose precious minutes logging into the WAP’s web portal only to find that a completely different tool would’ve localized the problem sooner.

That use case didn’t even consider applications; adding application performance management issues into the mix typically pulls more tools into the diagnostic phase. Many more examples could be cited, but here are six pain points you can avoid when you’ve got unified monitoring tools in place:

1. Apps Stuck in a Network Traffic Jam

This is one of the most common challenges for any toolset that isn’t unified: separating application performance degradation from high network traffic. Is your CRM application the culprit, or might it be a problem lower in the stack?

2. Inability to Identify Sources of SLA Threshold Failures

Managing SLA terms can have heavy fiscal impacts in some organizations. And when multiple tools are needed to isolate the cause of a service-level drop, the time to resolve may increase.

3. Inability to Prioritize Alerts

Using many tools can lead to a profusion of false positives. These are especially pernicious amid security threats, which should be prioritized above capacity management and routine maintenance. As SANS points out in the context of intrusion detection: “When you consider all the different things that can go wrong to cause a false positive, it is not surprising that false positives are one of the largest problems facing [implementers].”

4. One-Off Project Deployment and Routine Monitoring Tasks

There’s a temptation to use one set of tools to configure and test a new server cluster for deployment, and a different set for day-to-day monitoring. The result can be misleading alerts. A unified tool gives you visibility into both event families, potentially reducing noise and confusion.

5. Dissimilar Interfaces and Terminology Across Toolsets

This can interfere with expeditious problem resolution, even with trained personnel. When different managers use unique tools to solve different problems over time, your tools portfolio can get pretty overwhelming, and training budgets can become a luxury.

6. Difficulty Developing ‘Crime Scene Maps’

Cisco’s Denise Fishburn uses this term to characterize recurring problems that require tools to operate in tandem. Fishburn reminds IT teams that once a problem has been identified, “it’s time to improve (document, prevent/prepare/repair).” Failing to produce useful, shareable scripts — manual or automated — makes your job harder.

No Panaceas, but Unified Monitoring Suites Can Truly Be Sweet

An often-quoted truism said by former U.S. Secretary of Defense Donald Rumsfeld in a 2002 press conference reprised a risk management concept that originated earlier in NASA circles: “There are known knowns; there are things we know we know. We also know there are known unknowns. But there are also unknown unknowns — the ones we don’t know we don’t know. It is the latter category that tends to be the difficult ones.”

The underlying wisdom is generally thought to be sound and has appeared in some treatments of risk management, including those that consider the enterprise adoption of cloud services.

There’s a strong case to be made for unified monitoring solutions that tie together your network, application and infrastructure. Still, no single tool or set of tools can provide a 100-percent complete, real-time picture of everything happening on a complex network.

What tools can achieve as part of a unified monitoring system, though, is a reduction in the amount of “blindness” and “known unknowns.”


Do you get bogged down trying to maintain sufficient performance across your Microsoft applications while troubleshooting related problems as they happen? If so, here are seven tips that will help you manage your software from Redmond:

1: Don’t Try to Manage the Unknown

Ensuring optimal Microsoft application performance starts with automatically maintaining an up-to-date network and server inventory of hardware and software assets, physical connectivity and configuration. This helps you truly understand what you are supporting in your environment. It also saves time in identifying relationships between devices and applications and piecing them together to see the big picture. You may even find discrepancies in application versions or patch levels within Exchange or IIS server farms, which you can correct by discovering, mapping and documenting your assets. (A bare-bones discovery sweep is sketched below.)
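As a bare-bones illustration of automated discovery (a real monitoring product does far more), here’s a Python sketch that ping-sweeps a subnet using Linux-style ping flags; the subnet is a placeholder:

    import ipaddress
    import subprocess

    def discover(subnet):
        """Ping every host address in subnet and return the responders."""
        alive = []
        for ip in ipaddress.ip_network(subnet).hosts():
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "1", str(ip)],  # Linux ping flags
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
            )
            if result.returncode == 0:
                alive.append(str(ip))
        return alive

    # Feed the result into your inventory instead of maintaining it by hand.
    print(discover("192.168.1.0/28"))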

2: Monitor the Whole Delivery Chain

There are multiple elements responsible for providing Microsoft services and application content to end-users. Take monitoring Lync, for example. Lync alone has:

  • A multi-tier architecture consisting of a Front-End Server at the core
  • SQL Database servers on the back-end
  • Edge Server to enable outside the firewall access
  • Mediation Server for VoIP
  • And more

You get the idea. The same applies to any Web-based application: SharePoint on the front end, middleware systems and back-end SQL databases, not to mention the underlying network. Don’t take any shortcuts: monitor it all.

If any of these components in the application delivery chain underperforms, your Microsoft applications will inevitably slow down and bring employee communications, productivity and business operations down with them.

3: Understand Dependencies within Applications

There’s nothing worse than receiving an alert storm when a problem is detected. It can take hours to sort out what has a red status, why it has that status, and whether it reflects a real problem or a false positive. That wasted time delays root-cause identification and resolution.

A far better solution is to monitor the entire application service as a whole, including IIS servers, SQL servers, physical and virtual servers and the underlying network. Identify monitoring capabilities that will discover and track end-to-end dependencies and suppress redundant alerts (if a database is “down,” all related apps will also be “down”). This is also the foundation for building SLA monitoring strategies aligned with business goals. Read on to find out more.
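The suppression logic itself is simple. Here’s a minimal Python sketch with a hypothetical three-tier dependency map: walk each failed component up the chain and alert only on the highest failure.

    # Map each component to the upstream component it depends on (hypothetical).
    depends_on = {
        "crm-app": "sql-server",
        "sharepoint": "sql-server",
        "sql-server": "core-switch",
    }

    def root_cause(component, down):
        """Walk up the dependency chain to the highest failed component."""
        while depends_on.get(component) in down:
            component = depends_on[component]
        return component

    down = {"crm-app", "sharepoint", "sql-server"}
    roots = {root_cause(c, down) for c in down}
    print("alert on:", roots)          # {'sql-server'}
    print("suppress:", down - roots)   # {'crm-app', 'sharepoint'}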

4: Look for Tools That Can Go Deep

Application performance monitoring tools let you drill down from one unified view into the offending component, reducing triage and troubleshooting to just minutes. Even if you are not a DBA, you should be able to quickly identify that SQL is the culprit. Also think about automatic corrective actions as part of your monitoring strategy to restore service levels faster. These include actions such as writing to the event log, running Active Script or PowerShell scripts, and rebooting. For example, Exchange and SQL are well known for their high memory consumption and heavy I/O, so you may want to restart them automatically to avoid service disruptions for your users when memory consumption reaches a problematic level.
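As a rough sketch of what a threshold-triggered corrective action looks like (your monitoring tool would normally do this for you), here’s a Python example that restarts a Windows service when memory pressure crosses a line. The threshold, the service name and the use of the third-party psutil library are all assumptions:

    import subprocess
    import psutil  # third-party: pip install psutil

    MEMORY_THRESHOLD = 90.0  # percent; tune for your environment

    def check_and_restart(service):
        """Restart service when system memory use crosses the threshold."""
        used = psutil.virtual_memory().percent
        if used > MEMORY_THRESHOLD:
            print(f"memory at {used:.0f}%, restarting {service}")
            subprocess.run(
                ["powershell", "-Command", f"Restart-Service {service}"],
                check=True,
            )

    check_and_restart("MSExchangeIS")  # hypothetical target service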

5: Utilize Microsoft Application Monitoring Features

Use the built-in application monitoring features that come with your Microsoft applications like Exchange, SharePoint, Lync, IIS, Dynamics, SQL and Windows, or even some free tools. Every organization is different, so there really is no one-size-fits-all approach. Look for pre-packaged monitoring that lets you easily tweak settings, so you can also monitor custom applications or more feature-rich applications.

6: Don’t Forget Wireless Bandwidth Monitoring

It is a wireless world out there, and BYOD continues to grow. Mobility has transformed wireless networks into business-critical assets that support employee connectivity, productivity and business operations. Microsoft’s corporate headquarters, for example, runs Lync over Aruba Wi-Fi. Just as you want a map of your wired assets, look for capabilities to automatically generate dynamic wireless maps — WLCs, APs and clients — from the same single point of control.

7: Keep Stakeholders and Teams Regularly Updated

Your Microsoft applications may be the backbone of your business. Slowdowns, intermittent application performance problems or outright failures will drive escalations through the roof, not to mention bringing productivity, operations and even revenue to a halt. Customizable reporting (by application, by server, by location, etc.) and automatic email distribution (daily, weekly, monthly, etc.) will help keep cross-functional team members and stakeholders in the know. Get in the habit of periodically analyzing all performance data to identify problematic trends early, plan capacity properly and justify investment in additional resources.

Maintaining network performance can sometimes feel like a gargantuan task, with issues seemingly coming out of nowhere. However, many of these unforeseen problems can actually be anticipated and avoided with the correct monitoring solutions in place.

When transitioning to a new solution, do IT vendors elicit a mix of anticipation and fear? That makes sense. You’re eager to see the new service hard at work, but simultaneously concerned it won’t live up to the hype or deliver on the supplier’s promises.

These transitions also tend to cost a lot of money and resources, so a failed transition usually doesn’t bode well for the decision-maker. Although no transition is foolproof, it’s worth running down the following support checklist. Have you covered all your bases, or is there more work to do before you take the plunge?

Have you researched other vendors?

The act of bringing in a new tech vendor is a lot like hiring a new employee. If you haven’t spent the time “interviewing” prospective providers and vetting their resumes, take a step back and do some more research.

Do they eat their own dogfood?

If an APM vendor is trying to sell you their monitoring tool, but you notice that their competitor’s tool is open in the background on their computer during a demo, that probably doesn’t instill confidence. Does your potential vendor use its own product or eschew it in favor of other solutions? Since you’re likely making the switch to a new service or technology, your new vendor should be prepared to demonstrate the same confidence in that offering.

If they don’t use it, ask why. If they do, ask for proof.

Is it a closed environment?

Is the technology interoperable with other offerings, or are you compelled to use only what the vendor is selling? More importantly, what’s the plan if you switch providers or the vendor goes out of business? Bottom line: If they’re locking you in, get out. The last thing you want is a broken legacy tool without any support. Unfortunately, it happens all the time, so make sure you have an option to get out.

Is there data to support their cause?

If you’re looking to link up with a new vendor, ask how they track customer needs and serve up effective solutions. The answer should involve some form of data analytics. If it’s a generalized “mission statement” about customization or best practices, take a pass. Hard data is critical to handling customer needs effectively.

Does it meet your needs or is it hype?

Does the product you’re considering really meet your needs? It’s easy to get caught in the hype trap and spring for something you don’t really need. Maybe a Magic Quadrant report convinced your boss it was the hot new ticket and they couldn’t turn it down. Instead, look for key characteristics such as single-pane-of-glass monitoring across physical and virtual servers as well as applications.

How does their licensing work?

How is licensing handled? Per-seat is the old standby, but it often serves to line vendors’ pockets rather than offering you any significant benefit. Consider shopping for a provider that offers per-device licensing to help manage costs and simplify the process of getting up to speed. Too often, vendors provide overly complicated licensing. If you can’t grasp how their licensing and pricing work, assume they did that on purpose.
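The per-seat versus per-device difference is easy to sanity-check with back-of-the-envelope numbers. Every figure in this Python sketch is made up; plug in your own head counts and quotes:

    # Hypothetical environment: many users sharing fewer monitored devices.
    users, devices = 250, 60
    per_seat_price, per_device_price = 40.0, 120.0  # dollars/year, invented

    per_seat_total = users * per_seat_price        # 250 * 40  = $10,000
    per_device_total = devices * per_device_price  # 60 * 120  = $7,200

    print(f"per-seat:   ${per_seat_total:,.0f}/year")
    print(f"per-device: ${per_device_total:,.0f}/year")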

Are they really trying to help you?

Whose success is your prospective partner focused on? While all IT vendors are in the market to make a healthy profit, they should have teams, systems and processes in place designed to assess your needs, measure your satisfaction and take action where warranted. If you get a “cog in the machine” or “check in the bank” vibe from your vendor, back away and find another offering.

Is their support adequate?

Support isn’t a static discipline. If you’re considering an agreement with a new provider, what kind of training and education is available to sysadmins down the road? If your vendor doesn’t offer this or even see the need, you may want to opt out.

Break It Down

It’s easy to talk generally about cost: you want to spend “X” and not exceed “Y.” Here’s the thing: you need a more concrete answer. Start with a decent cost calculator and see what shakes out, then refine as needed to find a bottom line that suits your needs and your budget.

All companies eventually move up, move laterally or simply need to act to keep up with IT trends. Do your workload a favor: run this checklist first, adjust as needed and then dive into your new investment.

Network Protocols

It’s obviously easy to tell when two humans are communicating with one another. It’s not as easy for some folks to grasp how two machines communicate with each other. But they do; it’s just less obvious. Hint: they don’t Snapchat. Instead, components within your IT infrastructure, like routers and applications, use network protocols to chat with each other.

Network protocols become especially important when they are carrying your company’s information. When machines don’t communicate with each other properly, vital information is lost.

Moreover, network protocols alert sysadmins to the status of IT health and performance. If you’re not paying attention to what your network protocols are trying to tell you, devices on your network could be failing without your knowing it.

To better understand the importance of network protocols, you should become familiar with the most commonly used ones.

SNMP (Simple Network Management Protocol)

IT pros use SNMP to collect information as well as to configure network devices such as servers, printers, hubs, switches, and routers on an IP network. How does it work? You install an SNMP agent on a device. The SNMP agent allows you to monitor that device from an SNMP management console. SNMP’s developers designed this protocol so it could be deployed on the largest number of devices and so it would have minimal impact on them. Also, they developed SNMP so that it would continue to work even when other network applications fail.
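To make that concrete, here’s a minimal SNMP GET in Python using the third-party pysnmp library (v4-style high-level API); the community string and device address are placeholders:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Ask the device to describe itself (sysDescr.0) over SNMP v2c.
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public"),                 # community string: placeholder
        UdpTransportTarget(("192.0.2.1", 161)),  # device address: placeholder
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))

    if error_indication or error_status:
        print("SNMP error:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(f"{name.prettyPrint()} = {value.prettyPrint()}")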

WMI (Windows Management Instrumentation)

WMI is the Microsoft implementation of Web-Based Enterprise Management, a software industry initiative to develop a standard for accessing management information in the enterprise. This protocol creates an operating system interface that receives information from devices running a WMI agent. WMI gathers details about the operating system, hardware or software data, the status and properties of remote or local systems, configuration and security information, and process and services information. It then passes all of these details along to the network management software, which monitors network health, performance, and availability. Although WMI is a proprietary protocol for Windows-based systems and applications, it can work with SNMP and other protocols.
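Here’s a small taste of what WMI exposes, using the third-party Python wmi module on a Windows host; the stopped-service query is just an example of the kind of health check a monitoring tool would run:

    import wmi  # third-party: pip install wmi (Windows only)

    conn = wmi.WMI()  # local machine; remote hosts take computer=/user=/password=

    # Operating system details, as a monitoring agent would collect them.
    for os in conn.Win32_OperatingSystem():
        print(os.Caption, os.Version)

    # Services that should start automatically but aren't running.
    for svc in conn.Win32_Service(StartMode="Auto", State="Stopped"):
        print("stopped:", svc.Name)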

SSH (Secure Shell)

SSH is a UNIX-based command interface that allows a user to gain remote access to a computer. Network administrators use SSH to control devices remotely. SSH creates a protective “shell” through encryption so that information can travel between network management software and devices. In addition to the security measure of encryption, SSH requires IT administrators to provide a username, password, and port number for authentication.
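Scripted SSH access looks like this in Python with the third-party paramiko library; the host and credentials are placeholders:

    import paramiko  # third-party: pip install paramiko

    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # lab use only

    client.connect("192.0.2.10", port=22, username="admin", password="secret")

    # Run a command over the encrypted channel and read its output.
    stdin, stdout, stderr = client.exec_command("uptime")
    print(stdout.read().decode().strip())
    client.close()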

Telnet

Telnet is one of the oldest communications protocols. Like SSH, it enables a user to control a device remotely. Unlike SSH, Telnet doesn’t use encryption, and it has been criticized for being less secure. In spite of that, people still use Telnet because some servers and network devices still require it.
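For comparison, here’s a scripted Telnet session using Python’s telnetlib (present in the standard library through Python 3.12, removed in 3.13). The device address, prompts and credentials are placeholders, and note that everything, passwords included, crosses the wire in plain text:

    import telnetlib  # deprecated; removed from the standard library in 3.13

    tn = telnetlib.Telnet("192.0.2.20")   # placeholder device address
    tn.read_until(b"login: ")
    tn.write(b"admin\n")
    tn.read_until(b"Password: ")
    tn.write(b"secret\n")                 # sent unencrypted -- prefer SSH
    tn.write(b"show version\n")
    tn.write(b"exit\n")
    print(tn.read_all().decode())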

Monitoring Your Infrastructure

Like almost every other IT team out there, yours is probably dealing with an infrastructure composed of a mishmash of servers, network equipment, mobile devices and applications. Automatically discovering, managing and monitoring all of it requires unified infrastructure and application monitoring technology that speaks all four of these protocols.


How IT Pros Can Save 30 Minutes a Day
Learn how to eliminate time wasters and get 30 minutes of your day back

Nobody knows the value of time better than an IT pro. Staying ahead of issues gives IT breathing room to enhance the network, instead of wasting time on fixing problems. 2016 is no different: Your IT team will need to once again deploy patches, install new hardware and transition to yet another upgraded Windows platform.

That’s right. The start of a new year always brings with it new challenges, but 2016 stands out as a year that could bring unforeseen complications following the release of Windows 10. Depending on your deployment plan, moving over to the latest incarnation of Windows is a massive additional project.

To get that time back, you and your team need to save 30 minutes a day this year. Our upcoming webinar on February 9th will hopefully help your team handle many of their core tasks quickly so they can concentrate on big projects like the Windows 10 migration.

In our upcoming webinar, we’ll discuss how using WhatsUp Gold infrastructure monitoring software will enhance your team’s ability to:

  • Manage and track your entire inventory, down to the component level
  • Configure new or replaced devices
  • Create network diagrams and stay within any necessary compliance
  • Handle the many other necessary and vital tasks that your team performs on a daily basis

Understanding how to save time on these regular tasks represents a massive opportunity over the course of 2016.

Save Every Precious Second You’ve Got

WhatsUp Gold provides all the visibility into your infrastructure that your team needs to reduce time spent on time-consuming tasks. IT administration is about managing a massive number of tasks. Knowing this, we’ve designed software that can help you save every precious second you’ve got.

The webinar will show how WhatsUp Gold can become an IT pro’s best friend, including the ability to:

  • Create a single pane of glass to monitor the overall health of the entire technical infrastructure
  • Provide highly customizable alerts that allow for automated features to address certain tasks
  • Integrate with other WhatsUp Gold plug-ins to help create a specific solution for your IT administration
  • Increase the ease of device configuration, auditing and configuration management
  • Enhance the ability to comply with regulations and increase the ease of internal audits

Learn How to Avoid IT Time Wasters 

Efficiency is the name of the game in the world of IT. Our upcoming webinar on February 9 at 2pm US ET will provide actionable ways for IT pros to examine their workflows and save 30 minutes a day.

Ipswitch surveyed IT professionals across the globe and it turns out that data security and compliance are top challenges for IT teams in 2016.

How We Did It

Ipswitch polled 555 IT team members who work at companies across the globe with more than 500 employees. We surveyed IT pros globally, partnering with Vanson Bourne in Europe, between October and November 2015 to learn about their file transfer habits and goals.

Demographics

255 in the US and 300 in Europe (100 each in the UK, France and Germany)

Totals by industry:

  • Banking/finance 15%
  • Government 15%
  • Healthcare 16%
  • Manufacturing 10%
  • Insurance 6%
  • Retail 6%
  • Other (includes Technology, Consulting, Utilities/Energy, Construction, & others) 32%

2016 State of Data Security and Compliance Infographic




Ipswitch’s FTPS server gave the Broncos the defense they needed for protecting data in motion.

Data Security a Huge Issue for NFL Teams

After a season of highs and lows, the Denver Broncos are headed to Super Bowl 50 to face the Carolina Panthers. But teamwork, dedication and hard work aren’t the only things that contributed to the Broncos’ surge to the NFL’s championship game.

The amount of data generated by an NFL team is staggering. Besides statistics, plays, strategies and a crunch of information that would make some quarterbacks’ heads hurt, the business of running a professional sports team requires videos, photos and graphics to be distributed to special events, marketing and fan relations partners.

Because of email and private network restrictions, all of this data used to be downloaded to discs, thumb drives or hard drives, then delivered by hand to players, coaches and other important members of the Broncos organization.

WS_FTP Is the Broncos’ Choice for an FTPS Server

But that manual process was time-consuming, inefficient and, not to mention, a huge data security risk. Ipswitch WS_FTP Server, an FTPS (File Transfer Protocol Secure) server, came to the rescue the same way Brock Osweiler saved the day – or at least didn’t blow it – this past season when quarterback Peyton Manning missed some of the action with an injured foot.

The franchise’s use of WS_FTP Server gave it a great defense for protecting data in motion: plays, high-definition videos, graphics and more, delivered securely to players, coaches and business partners. You could argue file transfer capabilities didn’t directly get the Broncos to the biggest game in football, but they certainly didn’t hurt.

Unlike Osweiler, who subbed for Manning only temporarily, WS_FTP Server is a permanent solution to the Broncos’ file transfer woes. It is secure enough to keep confidential team information out of the wrong hands – some would unfairly imply out of the New England Patriots’ hands. It’s also powerful enough to handle the influx and growth of data, and it gives the Broncos the visibility and control they need for top performance.

Another key quality of WS_FTP Server is its failover configuration, which increases uptime, availability and performance consistency. Unlike the Microsoft Surface tablets that failed the Patriots during the recent AFC Championship game, WS_FTP Server won’t go down or leave the Broncos’ files in limbo, unprotected and undelivered.

NFL Becoming a Technology-Driven Business

The NFL’s need for quality IT service goes beyond devices displaying plays and diagrams. File transfer played a role in the way football went from throwing a pigskin down a grassy field to being a technology-driven business.

With just a username and password, partners can complete file transfers in a few clicks. So before the Broncos head to Santa Clara for the big game, the team can rest easy knowing its files are secure and accessible by all the players, coaches, team executives and business professionals keeping the team running smoothly.
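To show how simple the partner side of an FTPS exchange can be, here’s a minimal sketch using Python’s standard-library ftplib; the server, credentials and file name are placeholders, not Broncos specifics:

    from ftplib import FTP_TLS  # standard-library FTPS client

    ftps = FTP_TLS("ftp.example.com")     # placeholder server
    ftps.login("partner_user", "secret")  # the username-and-password step
    ftps.prot_p()                         # encrypt the data channel, too

    # Upload a game-film-sized file over the protected connection.
    with open("highlights.mp4", "rb") as f:
        ftps.storbinary("STOR highlights.mp4", f)
    ftps.quit()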

Read the Ipswitch File Transfer Case Study: Denver Broncos

We’ll find out Sunday if the Broncos and Manning can beat the tough Panthers, if the commercials will make us laugh and if Beyoncé and Coldplay will dazzle with their halftime show. But one thing the Broncos and all Ipswitch customers can always be assured of is the success, security and compliance of the WS_FTP Server file transfer solution.