Ipswitch Blog


Pen Testing

Pen testing (aka penetration testing) is the subject of ongoing debate, and of a great deal of misunderstanding. Talk about it with fellow sysadmins over lunch and you’re likely to hear a few different opinions on what it is and why you should or shouldn’t get involved. So, which is it: Do you need to be doing it, and if so, how often?

What It Is and What It Isn’t

A buddy over at your cloud supplier just told you penetration testing is the same as a vulnerability scan, whereas the helpdesk rep next to you says it’s a compliance audit. Your boss calls it a security assessment. They’re all wrong, and yet just a little bit right: Properly conducted pen testing tells you how effective your existing security controls really are when facing an active attack by a skilled cybercriminal. The test doesn’t just find vulnerabilities; it tells you how big the holes are.

What Will Pen Testing Tell Me?

Properly performed, pen testing will at least:

  • Determine the feasibility of certain attack vectors
  • Assess the magnitude of operational impacts by successful attacks
  • Provide evidence that your department needs a bigger budget
  • Test the department’s ability to detect and defend against agile attackers
  • Identify vulnerabilities that a simple vulnerability scan or security assessment will miss
  • Help you meet industry compliance specifications such as PCI DSS and HIPAA

Is It Worth It?

Even a basic, automated IP-based test isn’t cheap, and the services and software that perform in-depth testing can be downright expensive. When deciding how to go about this testing, you need to decide how important your company’s data and IP are, and what they’re worth. The average cost of a data breach is estimated to be more than $3 million. The Target data breach of 2013? Earlier this year, the big-box retailer put its 2013-2014 breach-related costs at $162 million, not including lost business and potential expenses from class-action lawsuits.

How Often Should Pen Testing Happen?

Those handling sensitive credit-card data are (or should be) well-versed in the Payment Card Industry Data Security Standard (PCI DSS). This standard actually requires that you perform pen testing annually, as well as after any system changes. Add to this list when end-user policies are changed, when a new office goes online and when security patches are installed — and you’ve got a solid idea of when a pen test should take place.

In-House or Farmed Out?

Although you may break out the toolbox when your car needs a belt or hose change, you shouldn’t be handling micrometers and a cylinder hone when the engine block needs decking. Take it to a professional so it’s done right. Pen testing follows the same principle. For instance, budget outfits like Acme Pen Testing abound online, charging as little as $50 for a report on your desk within a few days. But how reliable is that report? Not very, especially when you’re stuck telling the C-suite that a quick review overlooked a vulnerability that lost company data. If you’re going to pen test in-house, you need people who are specifically trained in pen testing.

Evan Saez, a cyber-threat analyst for LIFARS, recommends using automated tools for in-depth penetration testing. Why? These are the same types of tools that attackers use. Saez recommends Metasploit for a number of reasons, but the main upside is that it has a huge base of programmers who are constantly improving it. At the end of the day, the safest pen test is one built around today’s standards. Just make sure your cloud-based data is held to the same ones.

6 Pain Points You Can Avoid With Unified Infrastructure Monitoring

“The story of the blind men and an elephant originated in the Indian subcontinent from where it has widely diffused. It is a story of a group of blind men (or men in the dark) who touch an elephant to learn what it is like. Each one feels a different part, but only one part, such as the side or the tusk. They then compare notes and learn that they are in complete disagreement.” (Source: Wikipedia)

This parable rings true beyond the animal kingdom. In IT, for example, when unified monitoring tools are not part of the mix, sysadmins can’t see a full picture of their networks, systems and applications.

The advantages of a unified tool for full visibility could easily make a full switch worthwhile. TechTarget presents a typical use case: It’s a wireless access point that seems to be acting up, but the problem is actually in the wired subnet to which it’s connected. A technician could lose precious minutes logging into the WAP’s web portal only to find that a completely different tool would’ve localized the problem sooner.

That use case didn’t consider applications. Adding application performance management issues into the mix typically adds more tools into the diagnostic phase. Many more could be cited, but here are six pain points you can avoid when you’ve got unified monitoring tools in place:

1. Apps Stuck in a Network Traffic Jam

This is one of the most common challenges for any toolset that isn’t unified: separating application performance degradation from high network traffic. Is your CRM application the culprit, or might it be a problem lower in the stack?

2. Inability to Identify Sources of SLA Threshold Failures

Managing SLA terms can have heavy fiscal impacts in some organizations. And when multiple tools are needed to isolate the cause of a service-level drop, the time to resolve may increase.

3. Inability to Prioritize Alerts

Using many tools can lead to a profusion of false positives. These are especially pernicious amid security threats, which should be prioritized above capacity management and routine maintenance. SANS points out in the context of intrusion detection: “When you consider all the different things that can go wrong to cause a false positive, it is not surprising that false positives are one of the largest problems facing [implementers].”

4. One-Off Project Deployment and Routine Monitoring Tasks

There’s a temptation to use one set of tools to configure and test a new server cluster for deployment, and a different set for day-to-day monitoring. The result can be misleading alerts. A unified tool gives visibility into both event families, potentially reducing noise and confusion.

5. Dissimilar Interfaces and Terminology Across Toolsets

This can interfere with expeditious problem resolution, even with trained personnel. When different managers use unique tools to solve different problems over time, your tools portfolio can get pretty overwhelming, and training budgets can become a luxury.

6. Difficulty Developing ‘Crime Scene Maps’

This term is popular with Cisco’s Denise Fishburne, who uses it to characterize recurring problems that require tools to operate in tandem. Fishburne reminds IT teams that once a problem has been identified, “it’s time to improve (document, prevent/prepare/repair).” Without unified tools, producing useful, shareable scripts — manual or automated — is much harder than it needs to be.

No Panaceas, but Unified Monitoring Suites Can Truly Be Sweet

An often-quoted truism said by former U.S. Secretary of Defense Donald Rumsfeld in a 2002 press conference reprised a risk management concept that originated earlier in NASA circles: “There are known knowns; there are things we know we know. We also know there are known unknowns. But there are also unknown unknowns — the ones we don’t know we don’t know. It is the latter category that tends to be the difficult ones.”

The underlying wisdom is generally thought to be sound and has appeared in some treatments of risk management, including those that consider the enterprise adoption of cloud services.

There’s a strong case to be made for unified monitoring solutions that tie together your network, application and infrastructure. Still, no single tool or set of tools can provide a 100-percent complete, real-time picture of everything happening on a complex network.

What tools can achieve as part of a unified monitoring system, though, is a reduction in the amount of “blindness” and “known unknowns.”


Do you get bogged down trying to maintain sufficient performance across your Microsoft applications while troubleshooting related problems as they happen? If so, here are seven tips that will help you manage your software from Redmond:

1: Don’t Try to Manage the Unknown

Ensuring optimal Microsoft application performance starts with automatically maintaining an up-to-date network and server inventory of hardware and software assets, physical connectivity, and configuration. This helps you truly understand what is being supported in your environment. Doing this will also save time identifying relationships between devices and applications, and piecing them together to see the big picture. You may even find discrepancies in application versions or patch levels within Exchange or IIS server farms. You can correct these through discovering, mapping and documenting your assets.

2: Monitor the Whole Delivery Chain

There are multiple elements responsible for providing Microsoft services and application content to end-users. Take monitoring Lync, for example. Lync alone has:

  • A multi-tier architecture consisting of a Front-End Server at the core
  • SQL Database servers on the back-end
  • Edge Server to enable outside the firewall access
  • Mediation Server for VoIP
  • And more…

You get the idea. The same applies to any Web-based application: think SharePoint on the front end, middleware systems and back-end SQL databases, not to mention the underlying network. Don’t take any shortcuts; monitor it all.

If any of these components in the application delivery chain underperforms, your Microsoft applications will inevitably slow down and bring employee communications, productivity and business operations down with them.

3: Understand Dependencies within Applications

There’s nothing worse than receiving an alert storm when a problem is detected. It can take hours to sort out what has a red status, why it has that status, and whether it represents a real problem or a false positive. That wastes time and delays root-cause identification and resolution.

A far better solution is to monitor the entire application service as a whole. This includes IIS servers, SQL servers, physical and virtual servers and the underlying network. Identify monitoring capabilities that will discover and track end-to-end dependencies and suppress redundant alerts (if a database is “down,” all related apps will also be “down”). This is also the foundation for building SLA monitoring strategies aligned with business goals. Read on to find out more.
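To make dependency-aware alerting concrete, here is a minimal, hypothetical Python sketch (not tied to WhatsUp Gold or any other product; the component names are invented). It models a small dependency map and alerts only on probable root causes while suppressing the downstream noise. In a real deployment, the dependency map would come from automated discovery rather than a hand-written dictionary.

# Minimal sketch of dependency-aware alert suppression (illustrative only;
# component names are hypothetical).

# Map each component to the upstream components it depends on.
DEPENDS_ON = {
    "crm-app":        ["sql-cluster", "iis-farm"],
    "sharepoint-web": ["sql-cluster", "iis-farm"],
    "iis-farm":       ["core-switch"],
    "sql-cluster":    ["core-switch"],
    "core-switch":    [],
}

def root_causes(down):
    """Return only the down components with no down upstream dependency."""
    return {c for c in down if not any(dep in down for dep in DEPENDS_ON.get(c, []))}

def alerts(down):
    """Alert on probable root causes; suppress the downstream noise."""
    causes = root_causes(down)
    suppressed = down - causes
    return ([f"ALERT: {c} is down" for c in sorted(causes)] +
            [f"suppressed: {c} (upstream dependency is down)" for c in sorted(suppressed)])

if __name__ == "__main__":
    # The SQL cluster fails, which also takes the CRM and SharePoint apps down;
    # only the SQL cluster should page anyone.
    for line in alerts({"sql-cluster", "crm-app", "sharepoint-web"}):
        print(line)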

4: Look for Tools That Can Go Deep

Application performance monitoring tools let you drill down from one unified view into the offending component, reducing triage and troubleshooting to just minutes. Even if you are not a DBA, you should be able to quickly identify that SQL is the culprit. Plus, think about automatic corrective actions as part of your monitoring strategy to restore service levels faster. These include actions such as Write Event Log, Run Script, Reboot, and Active Script or PowerShell scripts. For example, Exchange and SQL are well known for their high memory consumption and heavy I/O, so you may want to restart them automatically when memory use reaches a problematic level, before users see a service disruption.
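As a rough illustration of that kind of automatic corrective action, here is a hypothetical Python sketch that uses the psutil library and Windows’ built-in net command to restart a service once its worker process crosses a memory threshold. The process name, service name and 6 GB limit are placeholders, and a real monitoring tool would do this through its own action engine rather than a standalone script run with administrative rights.

# Illustrative only: restart a Windows service when its process exceeds a
# memory threshold. Names and the threshold are hypothetical examples.
import subprocess
import psutil

PROCESS_NAME = "sqlservr.exe"        # process to watch (example)
SERVICE_NAME = "MSSQLSERVER"         # service to restart (example)
MEMORY_LIMIT_BYTES = 6 * 1024 ** 3   # 6 GB threshold (example)

def worst_offender(name):
    """Return the matching process using the most resident memory, if any."""
    matches = [p for p in psutil.process_iter(["name", "memory_info"])
               if p.info["name"] and p.info["name"].lower() == name.lower()]
    return max(matches, key=lambda p: p.info["memory_info"].rss, default=None)

def restart_service(service):
    """Stop and start the service with the built-in Windows 'net' command."""
    subprocess.run(["net", "stop", service], check=True)
    subprocess.run(["net", "start", service], check=True)

if __name__ == "__main__":
    proc = worst_offender(PROCESS_NAME)
    if proc and proc.info["memory_info"].rss > MEMORY_LIMIT_BYTES:
        used_gb = proc.info["memory_info"].rss / 1024 ** 3
        print(f"{PROCESS_NAME} is using {used_gb:.1f} GB; restarting {SERVICE_NAME}")
        restart_service(SERVICE_NAME)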

5: Utilize Microsoft Application Monitoring Features

Use built-in application monitoring features that come with your Microsoft applications like Exchange, SharePoint, Lync, IIS, Dynamics, SQL and Windows, or even some free tools. Every organization is different, so there really is no one-size-fits-all approach. Look for pre-packaged monitoring with capabilities to easily tweak settings, so you can also monitor custom applications or more feature-rich applications.

6: Don’t Forget Wireless Bandwidth Monitoring

It is a wireless world out there, and BYOD continues to grow. Mobility has transformed wireless networks into business-critical assets that support employee connectivity, productivity and business operations. For example, Microsoft corporate headquarters runs Lync over Aruba Wi-Fi. Just like you want a map of your wired assets, look for capabilities to automatically generate dynamic wireless maps — WLCs, APs and clients — from the same single point of control.

7: Keep Stakeholders and Teams Regularly Updated

Your Microsoft applications may be the backbone of your business. Slowdowns, intermittent application performance problems or failures will drive escalations through the roof, not to mention bringing productivity, operations and even revenue to a halt. Customizable reporting (by application, by server, by location, etc.) and automatic email distribution capabilities (daily, weekly, monthly, etc.) will help keep cross-functional team members and stakeholders in the know. Get in the habit of periodically analyzing all performance data to identify problematic trends early on, properly plan capacity, and justify investment in additional resources.

Maintaining network performance can sometimes feel like a gargantuan task, with issues seemingly coming out of nowhere. However, many of these unforeseen problems can actually be anticipated and avoided with the correct monitoring solutions in place.

When transitioning to a new solution, do IT vendors elicit a mix of anticipation and fear? That makes sense. You’re eager to see the new service hard at work, but simultaneously concerned it won’t live up to the hype or deliver on promises made by the supplier.

Also, these transitions tend to cost a lot of money and resources, so a failed transition usually doesn’t bode well for the decision-maker. Although no transition is foolproof, it’s worth running down the following support checklist. Have you covered all your bases, or is there more work to do before you take the plunge?

Have you researched other vendors?

The act of bringing in a new tech vendor is a lot like hiring a new employee. If you haven’t spent the time “interviewing” prospective providers and vetting their resumes, take a step back and do some more research.

Do they eat their own dogfood?

If an APM vendor is trying to sell you their monitoring tool, but you notice that their competitor’s tool is open in the background on their computer during a demo, that probably doesn’t instill confidence. Does your potential vendor use its own product or eschew it in favor of other solutions? Since you’re likely making the switch to a new service or technology, your new vendor should be prepared to demonstrate the same confidence in that offering.

If they don’t use it, ask why. If they do, ask for proof.

Is it a closed environment?

Is the technology interoperable with other offerings, or are you compelled to use only what the vendor is selling? Moreover, what’s the plan if you switch providers or the vendor goes out of business? Bottom line: If they’re locking you in, get out. The last thing you want is a broken legacy tool without any support. Unfortunately, it happens all the time, so make sure you have an option to get out.

Is there data to support their cause?

If you’re looking to link up with a new vendor, ask how they track customer needs and serve up effective solutions. The answer should involve some form of data analytics. If it’s a generalized “mission statement” about customization or best practices, take a pass. Hard data is critical to handling customer needs effectively.

Does it meet your needs or is it hype?

Does the product you’re considering really meet your needs? It’s easy to get caught in the hype trap and spring for something you don’t really need. Maybe a Magic Quadrant report convinced your boss it was the hot new ticket and they couldn’t turn it down. Instead, look for key characteristics such as single-pane-of-glass monitoring across physical and virtual servers as well as applications.

How does their licensing work?

How is licensing handled? Per-seat is the old standby, but it often serves to line vendors’ pockets rather than offering you any significant benefit. Consider shopping for a provider that offers per-device licensing to help manage costs and simplify the process of getting up to speed. Too often, vendors provide overly complicated licensing. If you can’t grasp how their licensing and pricing work, assume they did that on purpose.

Are they really trying to help you?

Whose success is your prospective partner focused on? While all IT vendors are in the market to make a healthy profit, they should have teams, systems and processes in place designed to assess your needs, measure your satisfaction and take action where warranted. If you get a “cog in the machine” or “check in the bank” vibe from your vendor, back away and find another offering.

Is their support adequate?

Support isn’t a static discipline. If you’re considering an agreement with a new provider, what kind of training and education is available to sysadmins down the road? If your vendor doesn’t offer this or even see the need, you may want to opt out.

Break It Down

It’s easy to talk generally about cost; you want to spend “X” and not exceed “Y”. Here’s the thing: You need a more concrete answer. Start with a decent cost calculator and see what shakes out. Refine as needed to find a bottom line that suits your needs and your budget.
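If it helps to make “X” and “Y” concrete, here is a toy Python sketch of the kind of back-of-the-envelope comparison a licensing cost calculator performs. Every figure in it is a hypothetical placeholder to be replaced with your own vendor quotes; it is not any vendor’s actual pricing.

# Toy licensing cost comparison (all figures are hypothetical placeholders).
def per_seat_cost(users, price_per_seat, years):
    return users * price_per_seat * years

def per_device_cost(devices, price_per_device, years):
    return devices * price_per_device * years

if __name__ == "__main__":
    years = 3
    seat = per_seat_cost(users=400, price_per_seat=45.0, years=years)
    device = per_device_cost(devices=250, price_per_device=60.0, years=years)
    print(f"Per-seat over {years} years:   ${seat:,.0f}")
    print(f"Per-device over {years} years: ${device:,.0f}")
    print("Cheaper model:", "per-device" if device < seat else "per-seat")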

All companies eventually move up, move laterally or simply find they need to act to keep up with IT trends. Do your workload a favor: Run this checklist first, adjust as needed and then dive into your new investment.

Network Protocols

It’s obviously easy to tell when two humans are communicating with one another. It’s not as easy for some folks to grasp how two machines communicate with each other. But they do; it’s just less obvious. Hint: they don’t Snapchat. Instead, components within your IT infrastructure, like routers or applications, use network protocols to chat with each other.

Network protocols become especially important when it comes to sharing information about your company. When machines don’t communicate with each other properly, vital information is lost.

Moreover, network protocols alert sysadmins to the status of IT health and performance. If you’re not paying attention to what your network protocols are trying to tell you, devices on your network could be failing without your knowing it.

To better understand the importance of network protocols, you should become familiar with the ones that are most commonly used.

SNMP (Simple Network Management Protocol)

IT pros use SNMP to collect information as well as to configure network devices such as servers, printers, hubs, switches, and routers on an IP network. How does it work? You install an SNMP agent on a device. The SNMP agent allows you to monitor that device from an SNMP management console. SNMP’s developers designed this protocol so it could be deployed on the largest number of devices and so it would have minimal impact on them. Also, they developed SNMP so that it would continue to work even when other network applications fail.
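As a quick illustration of what an SNMP query looks like in practice, here is a small Python sketch using the pysnmp library; the device address and the “public” community string are placeholders. This is roughly what an SNMP management console does, just on a much larger scale.

# Minimal SNMP GET example with pysnmp (pip install pysnmp).
# The target address and community string are placeholders.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData("public", mpModel=1),          # SNMPv2c read community
           UdpTransportTarget(("192.0.2.10", 161)),     # device IP and SNMP port
           ContextData(),
           ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)))  # system description
)

if error_indication:
    print("SNMP error:", error_indication)
else:
    for name, value in var_binds:
        print(f"{name.prettyPrint()} = {value.prettyPrint()}")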

WMI (Windows Management Instrumentation)

WMI is the Microsoft implementation of Web-Based Enterprise Management, a software industry initiative to develop a standard for accessing management information in the enterprise. This protocol creates an operating system interface that receives information from devices running a WMI agent. WMI gathers details about the operating system, hardware or software data, the status and properties of remote or local systems, configuration and security information, and process and services information. It then passes all of these details along to the network management software, which monitors network health, performance, and availability. Although WMI is a proprietary protocol for Windows-based systems and applications, it can work with SNMP and other protocols.
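For a feel of what a WMI query looks like, here is a short sketch using the third-party Python wmi package on a Windows machine; the remote computer name and credentials are placeholders. Management software issues the same kinds of queries under the hood.

# Minimal WMI query example (pip install wmi; Windows only).
# The remote computer name and credentials are placeholders.
import wmi

# Connect to a remote machine; call wmi.WMI() with no arguments for the local one.
conn = wmi.WMI(computer="FILESERVER01", user="DOMAIN\\monitor", password="********")

# Operating system details
for os_info in conn.Win32_OperatingSystem():
    print("OS:", os_info.Caption, "| free physical memory (KB):", os_info.FreePhysicalMemory)

# Local fixed disks and their free space (DriveType 3 = local disk)
for disk in conn.Win32_LogicalDisk(DriveType=3):
    print(f"{disk.DeviceID} free: {int(disk.FreeSpace) / 1024 ** 3:.1f} GB")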

SSH (Secure Shell)

SSH is a UNIX-based command interface that allows a user to gain remote access to a computer. Network administrators use SSH to control devices remotely. SSH creates a protective “shell” through encryption so that information can travel between network management software and devices. In addition to the security measure of encryption, SSH requires IT administrators to provide a username, password, and port number for authentication.
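Here is a brief sketch of remote command execution over SSH using the Python paramiko library; the host, credentials and command are placeholders. This mirrors how monitoring tools collect data from UNIX and Linux hosts and network devices.

# Minimal SSH example with paramiko (pip install paramiko).
# Host, credentials and the command are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only; verify host keys in production
client.connect("192.0.2.20", port=22, username="netadmin", password="********")

# Run a command on the remote device and print its output.
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode().strip())

client.close()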

Telnet

Telnet is one of the oldest communications protocols. Like SSH, it enables a user to control a device remotely. Unlike SSH, Telnet doesn’t use encryption, and it’s been criticized for being less secure. In spite of that, people still use Telnet because some servers and network devices still require it.
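For completeness, here is a minimal Telnet session sketch using Python’s standard telnetlib module (deprecated in recent Python releases); the host, credentials and the “show version” command are placeholders. Notice that everything, including the password, crosses the wire in plain text, which is exactly why SSH is preferred where possible.

# Minimal Telnet example with the standard-library telnetlib module
# (deprecated in newer Python versions). Host, credentials and the
# command are placeholders; all traffic is unencrypted.
import telnetlib

tn = telnetlib.Telnet("192.0.2.30", 23, timeout=10)
tn.read_until(b"login: ")
tn.write(b"admin\n")
tn.read_until(b"Password: ")
tn.write(b"********\n")          # sent in clear text!

tn.write(b"show version\n")      # example device command
tn.write(b"exit\n")
print(tn.read_all().decode("ascii", errors="replace"))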

Monitoring Your Infrastructure

Like almost every other IT team out there, yours is probably dealing with an infrastructure composed of a mishmash of servers, network equipment, mobile devices, and applications. Automatically discovering, managing and monitoring all of it requires unified infrastructure and application monitoring technology that uses all four of these protocols.

 

 

 

How IT Pros Can Save 30 Minutes a Day
Learn how to eliminate time wasters and get 30 minutes of your day back

Nobody knows the value of time better than an IT pro. Staying ahead of issues gives IT breathing room to enhance the network, instead of wasting time on fixing problems. 2016 is no different: Your IT team will need to once again deploy patches, install new hardware and transition to yet another upgraded Windows platform.

That’s right. The start of a new year always brings with it new challenges, but 2016 stands out as a year that could bring unforeseen complications following the release of Windows 10. Depending on your deployment plan, moving over to the latest incarnation of Windows is a massive additional project.

To make up for it, you and your team need to save 30 minutes a day this year. Our upcoming webinar on February 9th will hopefully help your team handle many of their core tasks quickly so they can concentrate on big things like the new Windows 10.

In our upcoming webinar, we’ll discuss how using WhatsUp Gold infrastructure monitoring software will enhance your team’s ability to:

  • Manage and track your entire inventory, down to the component level
  • Configure new or replaced devices
  • Create network diagrams and stay in compliance with any necessary regulations
  • Handle many of the other vital tasks that your team performs on a daily basis

Understanding how to save time on regular tasks represents a massive opportunity over the course of 2016.

Save Every Precious Second You’ve Got

WhatsUp Gold provides the visibility into your entire infrastructure that your team needs to cut down on time-consuming tasks. IT administration means managing a massive number of tasks. Knowing this, we’ve designed software that can save every precious second you’ve got.

The webinar will show how WhatsUp Gold can become an IT pro’s best friend, including the ability to:

  • Create a single pane of glass to monitor the overall health of the entire technical infrastructure
  • Provide highly customizable alerts that allow for automated features to address certain tasks
  • Integrate with other WhatsUp Gold plug-ins to help create a specific solution for your IT administration
  • Increase the ease of device configuration, auditing and configuration management
  • Enhance the ability to comply with regulations and increase the ease of internal audits

Learn How to Avoid IT Time Wasters 

Efficiency is the name of the game in the world of IT. Our upcoming webinar on February 9 at 2pm US ET will provide actionable ways for IT pros to examine their workflows and save 30 minutes a day.

Ipswitch surveyed IT professionals across the globe and it turns out that data security and compliance are top challenges for IT teams in 2016.

How We Did It

Ipswitch polled 555 IT team members who work in companies across the globe with more than 500 employees. We surveyed IT pros globally, partnering with Vanson Bourne in Europe, in October and November 2015 to learn about their file transfer habits and goals.

Demographics

255 in the US and 300 in Europe (100 each in the UK, France and Germany)

Totals by industry:

  • Banking/finance 15%
  • Government 15%
  • Healthcare 16%
  • Manufacturing 10%
  • Insurance 6%
  • Retail 6%
  • Other (includes Technology, Consulting, Utilities/Energy, Construction, & others) 32%

2016 State of Data Security and Compliance Infographic


Ipswitch’s FTPS server gave the Broncos the defense they needed for protecting data in motion.

Data Security a Huge Issue for NFL Teams

After a season of highs and lows, the Denver Broncos are headed to Super Bowl 50 to face the Carolina Panthers. But teamwork, dedication and hard work aren’t the only things that contributed to the Broncos’ surge to the NFL’s championship game.

The amount of data generated by an NFL team is staggering. Besides statistics, plays, strategies and a crunch of information that would make some quarterbacks’ heads hurt, the business of running a professional sports team requires videos, photos and graphics to be distributed to special events, marketing and fan relations partners.

Because of email and private network restrictions, all of this data used to be downloaded to discs, thumb drives or hard drives. They would then be delivered by hand to players, coaches and other important members of the Broncos team.

WS_FTP is Broncos’ Choice for an FTPS Server

The franchise’s use of Ipswitch WS_FTP Server, an FTPS (file transfer protocol secure) server, gave it a great defense for protecting data in motion: plays, high-definition videos, graphics and more, delivered to players, coaches and business partners. You could argue file transfer capabilities didn’t directly get the Broncos to the biggest game in football, but they certainly didn’t hurt.

The old hand-delivery process was time-consuming and inefficient, not to mention a huge data security risk. Ipswitch’s WS_FTP Server came to the rescue the same way Brock Osweiler saved the day – or at least didn’t blow it – this past season when quarterback Peyton Manning missed some of the action with an injured foot.

Unlike Osweiler, who subbed for Manning only temporarily, WS_FTP Server was a permanent solution to the Broncos’ file transfer woes. WS_FTP Server is secure enough to keep confidential team information out of the wrong hands – some would unfairly imply out of the New England Patriots’ hands. It’s also powerful enough to handle the influx and growth of data, and gives ultimate visibility and control for top achievement.

Another key quality of WS_FTP Server is its uninterrupted service that increases uptime, availability and consistent performance with a failover configuration. Unlike the Microsoft Surface tablets that failed the Patriots during the recent AFC Conference Championship, WS_FTP Server won’t go down, or leave the Broncos’ files in limbo, unprotected and undelivered.

NFL Becoming a Technology-Driven Business

The NFL’s need for quality IT service goes beyond devices displaying plays and diagrams. File transfer played a role in the way football went from throwing a pigskin down a grassy field to being a technology-driven business.

With just a username and password, partners can complete file transfers in a few clicks. So before the Broncos head to Santa Clara for the big game, the team can rest easy knowing its files are secure and accessible by all the players, coaches, team executives and business professionals keeping the team running smoothly.

Read the Ipswitch File Transfer Case Study: Denver Broncos

We’ll find out Sunday if the Broncos and Manning can beat the tough Panthers, if the commercials will make us laugh and if Beyoncé and Coldplay will dazzle with their halftime show. But one thing the Broncos and all Ipswitch customers will always be assured of is the success, security and compliance of the WS_FTP Server file transfer solution.

 

Is There Such a Thing as Too Much Visibility?
Sometimes broad visibility can make it hard to see

Every day, many of us commuters have visibility issues and are at the mercy of unpredictable traffic. Often I have to leave a LOT of buffer to get to work in case I have an important meeting. Luckily, there are tools out there that put good traffic visibility at my fingertips. For instance, I rely almost entirely on Google Maps to “predict” how long it is going to take me to get to work, or to any other place for that matter. This type of visibility is crucial when things go wrong, like an accident. Google Maps will reroute me or at least give me a revised ETA so that I can make adjustments.

Fix Before You Fail

Is there an analogy to this in the online world? You would think that service providers and large enterprises would have this level of visibility into their networks, so that when things go wrong, like a device failure, they can pinpoint the root cause right away and take corrective action. Better yet, they can stay ahead of the game by watching for performance bottlenecks or warning signs of failure and fixing the issues before end users are affected.

BT Broadband Network Outage is a Lesson for SMBs

So, when the very large BT broadband network went down today, I had to wonder whether there is such a thing as too much visibility. Despite the service-provider level of visibility BT has, it took the company almost two hours to get all of its customers back online. Now imagine if you were an SMB or a mid-sized organization faced with a similar outage. Without sufficient visibility into the problem, your network could be down for hours, costing you, your employees and your customers significantly in terms of revenue, productivity and reputation.

How can today’s SMBs get service provider level visibility that won’t break the bank?  Here are some pointers:

  • Invest in a network monitoring tool that can discover all of your critical infrastructure
  • Make sure that the tool can provide insight into availability, performance, and security of your infrastructure
  • Choose a tool that is broad enough to support multiple monitoring technologies (e.g., SNMP, WMI, network flows) across your entire infrastructure (network devices, servers, wireless devices, applications, virtual machines, etc.)
  • Ensure that the tool can give you proactive insights as well as reactive alerts
  • Consider the total cost of ownership of the tool, from initial deployment through ongoing maintenance. Remember that DIY is not always free over the lifetime of owning the tool
  • Do not “under monitor” during the evaluation of the tool. Develop a monitoring configuration that reflects the entire production network, not just a subset that happens to suit the evaluation

Did you know your mobile phone and wearables are just as appealing to hackers as your online bank account? No one is impervious to increasingly sophisticated mobile device hacking. Case in point: James Clapper, the U.S. director of national intelligence (DNI), had his phone hacked last month, with calls rerouted to the Free Palestine Movement. And in October 2015, CIA director John Brennan’s mobile device fell victim to a group of “pot-smoking teenagers.” Bottom line? Not even next-gen hardware is completely safe.

So long as support enforces two-factor authentication and staff doesn’t access free Wi-Fi hotspots (especially when handling business data), a mobile phone should be safe, right? Nope. As noted by Dialed In and Wired, determined hackers do a lot more with your mobile and wearable technology than you may realize.

Mobile Phones: Hackers’ Best Friend

Any iPhone newer than the 4 comes with a high-quality accelerometer, or “tilt sensor.” If hackers access this sensor and you leave the phone on your desk, it is possible for them to both detect and analyze the vibration of your computer keyboard and determine what you’re typing, with 80 percent accuracy. So, say you type in the address of your favorite banking web portal and then your login credentials; hackers now have total access.

App developers have wised up to hackers targeting microphones and made it much more difficult to gain access without getting caught. Enterprising criminals, however, have discovered a way to tap a phone’s gyroscope and detect sound waves through it while you play Angry Birds or any other orientation-based program. So, the next time you talk about finances with your significant other while three-starring a new level in your go-to mobile game, you may also be giving hackers the information they need to steal from you.

Targeting RFID Chips

In an effort to make retail purchases easier and more secure, many credit cards come equipped with RFID chips. Smartphones, meanwhile, include near-field communication (NFC) technology that allows them to transmit and receive that RFID data. The risk, here, is that hackers who manage to compromise your phone can leverage malware to read the information from a card’s RFID chip if you’re storing it in a nearby wallet or card-carrying mobile case. Then they can make a physical copy. You’re defrauded and don’t even know it.

“Say Cheese”

Mobile cameras have also come under scrutiny, since hacking this feature lets attackers take snaps of you or your family whenever and wherever they want. Despite improvements in basic phone security, it’s still possible for malicious users to take control of your camera. It goes like this: Operating systems like Android now mandate that a preview of any new photograph be displayed on-screen, but they don’t specify the size of that preview, so a malicious app can shrink it until it’s effectively invisible. As a result, cybercriminals can take surreptitious photographs and then send them to anyone at any location.

MDM Leads to Risk

A large number of smartphones contain weak mobile device management (MDM) tools installed by carriers. And although reaching these tools in a target phone requires close proximity and the use of rogue base stations or femtocells, the risk is substantial. Attackers could take total control of your device.

Fit or Foul?

Mobile phones aren’t the only technology at risk; wearables are also open to attack. What can hackers do to these devices? Back in March 2015, wearable maker Fitbit was notified by researchers that its devices could be hacked in fewer than 10 seconds. While initial reports focused on logical changes such as altering steps taken or distance walked, as noted by The Hacker News, it wasn’t long before hackers discovered a way to inject malware that could potentially spread to all synced devices.

Potentially Lethal Consequences

Security flaws in wireless-enabled pacemakers could allow hackers to take control of (and then stop) this critical device as well. In September 2015, a team from the University of Southern Alabama managed to access a functioning pacemaker and “kill” a medical mannequin attached to the device.

Medical devices such as insulin pumps and implantable defibrillators have notoriously weak security — a lack of encryption and weak or default passwords, in particular — that cybercriminals can easily exploit to take control of them. The result? Delivering a fatal drug overdose or shocking perfectly healthy patients without warning.

Be Diligent About Mobile Security

The lion’s share of existing security issues stems from poor app development for mobile and wearable devices. Mobile device developers prioritize speed over security and eschew critical features such as encrypted commands, limited application sessions and disabling repeat requests. And while recognizing these flaws is the first step to improving mobile safety, users also need to be aware of today’s risk factors. Right now, hackers can do far more with a mobile or wearable than the user may realize.

In the early years of IT, data was stored on paper tapes

What did an IT position look like in the ’70s, ’80s and ’90s? Far fewer mobile endpoints, for one thing. Compared with today, the history of information technology shows some surprising differences in day-to-day tasks and in the technology that was available. IT support has come a long way, folks.

How Far Back?

IT has been around almost as long as humans. If you think about it, hieroglyphics are just a script devs don’t use anymore. Mechanical devices such as the slide rule, the Difference Engine, Blaise Pascal’s Pascaline and other mechanical computers qualify as IT, too. But this particular journey begins well into the 20th century.

The 1970s: Mainly Mainframes

Computers of this era were mostly mainframes and minicomputers, and a history of information technology wouldn’t be complete without mentioning them. IT job roles included manually running user batch tasks, performing printer backups, conducting system upgrades via lengthy procedures, keeping terminals stocked with paper and swapping out blown tubes. IT staff was relegated mainly to basements and other clean rooms that housed the big iron. System interconnectivity was minimal at the time, so people had to bridge those gaps themselves. This was the motivation behind the Internet (or the ARPANET, as it was known then).

The 1980s: Say Hello to the PC

This decade saw the growth of the minicomputer (think DEC VAX computers) and the introduction of the PC. Sysadmins crawled out of the basement and into the hallways and computer rooms of schools, libraries and businesses that needed them onsite. The typical IT roles at this time consisted of installing and maintaining file and print servers to automate data storage, retrieval and printing. Other business roles included installing and upgrading DOS on PCs.

If you worked in a school, you saw the introduction of the Apple II, Commodore 64 and, eventually, the IBM PC. But the personal computer was more expensive, seen as a business machine and not deployed in schools very much. It was the Apple II that propelled the education market forward, and if you worked support at a school in the ’80s, you knew all about floppy disks, daisy wheel printers and RS-232 cables.

The 1990s: Cubicles, Windows and the Internet

This generation of IT worked in cubicles (think “Tron” or “Office Space”), often sharing that space alongside the users they supported. Most employees were using PCs with Windows by this time, and IT support was focused on networking, network maintenance, PC email support, Windows and Microsoft Office installations — and adding memory or graphics cards for those who needed them.

Toward the end of the decade, the Web’s contribution to Internet connectivity became arguably the most requested computing resource among growing businesses. Although there was no Facebook, Twitter or LinkedIn yet (Friendster would kick off that trend in 2002), employers still worried about productivity and often limited Web access. Oh, and if you could go ahead and add modems to PCs, run phone lines for those who needed dial-up access and Internet-enable the business LAN, that would be great.

Today’s IT: Welcome to Apple, Patch Tuesday and BYOD

Today, IT job roles include the rebirth of Mac support, the introduction of social media (and the blocking of its access at work), constant security patches (Patch Tuesday on Windows, for instance), the advent of BYOD and DevOps automation.

The continued consumerization of IT (essentially now BYOD) meant that IT pros had “that kind” of job where friends and family would ask for help without pause. The one common thread through the years? The growth of automation in the IT role — something that will continue to define tomorrow’s helpdesk.

Image source: Wikimedia Commons

How IT Pros Can Navigate Through a Job Interview
What can you do to make the IT job interview go well?

You’ve landed an IT job interview. That’s the good news. Now you have the interview itself, and let’s be honest, it’s never fun. Most candidates don’t like putting on a show of the software and protocols they’re familiar with. Even actors aren’t in love with auditioning. The “social” aspect of recruitment isn’t something you should need to ace for an admin position, but it has to be done.

If the job is a really good one — the technical work that’ll challenge your current support acumen (and compensate you well for the weekend maintenance) — you probably have a bit of an imposter complex even just applying. When the “ideal candidate” is an infosec wizard, how dare you present yourself? But hey, you believe you can do it, and the pay is great. So read that magazine and wait to be met.

Find Strengths in Technical Weaknesses

What can you do to make the IT job interview go well? Some things should be no-brainers, but there’s a reason think pieces keep pounding them into your head (present article excluded). Don’t be “creepy” with company research, advises InformationWeek, and don’t dress for the beach unless an offbeat SMB suggests otherwise. Do pay attention to the job description, though (don’t ask questions it already answered), and learn enough about the employer to imply a healthy interest.

Ultimately, play to your strengths. Lawyers have a saying: If the facts are against you, argue the law; if the law is against you, argue the facts. If you don’t have hands-on experience in data center migration, stress your credentials in bandwidth control during this process. Show that you know what’s involved in secure file transfers even if you haven’t managed them offsite. If your formal credentials are thin, play up your experience in the network trenches during the Super Bowl traffic spike.

Be Mindful of the Interviewers Who Don’t Work in IT

With luck, your interview with an IT rep will find some common ground. There may be scripts you’re both comfortable reading or security issues you should both be following. This will give you the chance to talk like a human about what the job will involve. One of the bigger challenges of an IT job interview, however, is that you may also meet someone from the business side. This person knows only vaguely what network monitoring tools are and is probably a bit intimidated by the idea of bandwidth or network latency. In other words, they probably feel like the imposter, interviewing someone for a seat in ops they don’t fully understand.

But one thing you definitely don’t want to do is remind the interviewer of their own uncertainties. Talk confidently about the work, without going so deep into the technical weeds that the interviewer isn’t sure what you’re saying. Although that shorthand may demonstrate fluency in a multi-vendor environment, it can also suggest you can’t communicate well with other departments.

You’re a Social Animal

For better or worse, a job interview is a social interaction. Some sysadmins and IT pros would gladly trade the spotlight for wrestling with a wonky script or normalizing office bandwidth.

Nonetheless, this can produce a disconnect. As one IT candidate reported by Dice.com said when asked to describe the ideal work environment, “I just want a job where I can go in a room, do my work and be left alone.”

That candidate probably speaks for many admins, developers and other overworked helpdesk staff, but he didn’t get the job. Business people (including those who work for nonprofits and government) tend to celebrate charisma, and for good reason: The job is all about meeting client needs, which means talking to the customer to understand what they really want.

The good news? Your competition is other techies, probably just as geeky at heart.

The bottom line is that if you’re comfortable about your qualifications for the job — even if it is pushing your limits — that confidence will show through, and help you navigate the rocky spots. And who knows, you may be just who they’re looking for.