Blog of Roger Greene, CEO

Cloudburst

I find it difficult to get a handle on cloud services from reading vendor marketing materials. I want to learn how organizations set up cloud services, what they really think of them, and just how much they are willing to rely on them. Amazon Web Services’ recent three-day service disruption led to some soul searching among those affected. One of them was Doug Kaye of the Conversations Network. I admire Doug’s passion for, and success at, making conference talks available to a broad audience, and I find him refreshingly candid. Doug gives a thorough assessment of what happened and what his team learned from the experience. He also links to another straightforward analysis by SmugMug’s Don MacAskill. I think anyone considering cloud services will benefit from reading both posts.


The Economics of Monitoring

When I think back to the late ’80s and how the smart guys I knew diagnosed network problems, I am struck by how far we have come. Often the only tools were ping, traceroute, and the ability to think analytically. Now we have WhatsUp Gold and other products that map the physical network, monitor whether devices and applications are up, check bandwidth usage, sift through log files for important events, and more.
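To make the contrast concrete, here is a minimal sketch of the kind of up/down check these products automate at scale. This is an illustration only, not how any particular product works, and the host names are made up:

    import socket

    def is_up(host, port=80, timeout=3.0):
        """Return True if we can open a TCP connection to host:port."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical device list; a real monitoring product would discover these.
    for device in ("router.example.com", "mail.example.com"):
        print(device, "is up" if is_up(device) else "is DOWN")

Products like WhatsUp Gold layer discovery, mapping, alerting, and reporting on top of simple checks like this.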

The industry has advanced not just because technology has improved, but also because networks have become critical to the daily operations of most organizations. In the late ’80s, if a server went down, people shrugged, poked around, figured out what was going on, and rebooted it. Now there are many more devices, networks are more complex, and downtime costs real money. This has created a market that pushes products like WhatsUp Gold to become increasingly capable and sophisticated.

Now, though, I realize there is so much more that could be monitored, and monitoring it would make a huge difference in millions of people’s lives. Take, for example, water pumps in remote villages in the developing world. In ‘Fixing the Water Crisis’, Ned Breslin talks about the lack of clean water in much of the world. People without access to clean water are forced to drink polluted, dirty water, which makes them sick.

Over time, well-intentioned organizations have funded the installation of wells in many villages. After a well is built, a village has access to clean water, health improves, and everybody is better off. The story ends badly in many cases, though. Several years later, the well breaks, and no one outside the village finds out about it. So the villagers revert to drinking unsanitary water. Now, however, some kids have been born and grown up drinking only clean water. When they are forced to drink dirty water, they get very sick, sick enough to die. Ned tells one such sad story in his talk.

What we need is a low-cost way to monitor devices like water wells in remote areas, places without electricity, and, when a well breaks, to notify someone who can fix it. At present, the economics are not there. I hope that as monitoring technology continues to improve, we or another company will develop an affordable solution and help improve the lives of millions of people.
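To make the idea concrete, here is a rough sketch of the alerting half of the problem. It assumes a hypothetical sensor on each pump that sends a periodic heartbeat over some low-power link; the names and the 24-hour threshold are made up for illustration:

    from datetime import datetime, timedelta

    HEARTBEAT_THRESHOLD = timedelta(hours=24)  # assumed reporting interval

    def find_silent_pumps(last_heartbeats, now=None):
        """Return IDs of pumps that have not reported within the threshold.

        last_heartbeats maps pump ID -> datetime of the last heartbeat.
        """
        now = now or datetime.utcnow()
        return [pump_id for pump_id, seen in last_heartbeats.items()
                if now - seen > HEARTBEAT_THRESHOLD]

    # Made-up data: one healthy pump, one that went quiet three days ago.
    heartbeats = {
        "village-well-1": datetime.utcnow() - timedelta(hours=2),
        "village-well-2": datetime.utcnow() - timedelta(days=3),
    }
    for pump in find_silent_pumps(heartbeats):
        print(f"ALERT: {pump} has stopped reporting; send someone to check it.")

The software is the easy part. The hard part is powering the sensor, getting the heartbeat out of a village with no electricity, and doing it all at a cost the economics can bear.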

IT Rocks!

There is something about teamwork leading to a big success that really makes me proud of our staff. Our recent move to a new office in Lexington is an example.

[Photo: Personal attention for servers]

We hadn’t moved in about seven years, during which we grew quite a bit. We now have a complex set of interrelated systems, so mapping out the move was no easy task! IT and Operations did extensive planning so we would have minimal disruption.

Measure twice, cut once

In preparation, IT continued our transition to the cloud and to a co-lo facility for our public-facing servers. For months they cajoled, prodded, and pleaded with the telephone companies to be ready to transition our Internet link. They mapped out the cabling in our new office.

[Photo: Keith setting up wireless]

They developed a precise plan to keep downtime to an absolute minimum, and they tested extensively to make sure we would have no glitches, but could recover quickly if any arose.

And they had superb support from Operations, who coordinated an extraordinary range of activities, vendors and staff.

Steve led an IT team of five people. When the move day – Friday – arrived, they started as late as they could to give Sales and Technical Support maximum access to systems for the day. By midnight our team had systems up and running at the new office and had confirmed no data was lost. Then they had a full day of testing on Saturday. On Sunday, the entire IT team volunteered to go to each desk to test network and phone access, so users would find exactly the same environment they were used to when they arrived on Monday.

Melissa managed everything to do with the physical side of the move, including coordinating with our landlords and finding the movers. Thanks!

In IT, thanks go to Keith, Jim, John from Madison, Aftab, Eric, Mike S. from Augusta, and all of the other volunteers, with leadership and support from Steve and Mike M.

It was an exhausting move weekend after months of preparation, and the result was stellar. When people arrived on Monday, all systems were working, and the transition to our new office was smooth.

[Photo: Melissa managed the move]

As if this wasn’t enough, a week later we moved our Livonia, Michigan office. Many of the same team members flew out to manage that move, and it went smoothly as well.

Two weeks, two flawless moves. Thanks, team! You make me proud.