Cloud Computing and Other Buzzwords

The technology that drives health care today is changing in response to increased concerns about security and reliability, and to external regulations like the HIPAA security rules.  In addition, the HITECH portion of this year's stimulus law provides incentives for health care providers to adopt technology that allows for health data exchange and for quality reporting (a data-driven process for reporting outcomes on certain quality measures as defined by the Secretary of Health and Human Services).  There are a fair number of technology vendors that provide electronic health record (EHR) systems today, and also a fair number of vendors that have developed business intelligence or more sophisticated data reporting tools.  Health data exchange is a newer field; Google and Microsoft have begun developing systems that allow users to establish a personal health record database, and some states have started planning for larger-scale data repositories, but the concept is still in its beginning stages.

A buzzword today in technology is “cloud computing,” which is a fancy way of describing internet systems that businesses can rent from service providers to perform business tasks.  The idea is not new, even if the buzzword is; in days of yore, we called these “application service providers,” or ASPs for short.  I suppose the IT marketing folks got sick of being compared with a nasty snake and thought clouds were better (or maybe more humorous, if they had ever read Aristophanes).  Of course, the pejorative “vaporware,” which roughly describes software a vendor markets before it actually has anything to sell, also rings of clouds and things in the sky.  And the old “pie in the sky,” a way of saying “that’s a nice idea but has no hope of being useful down here where mere mortals live,” could also relate to clouds.

That aside, there may be something to cloud computing for us mere mortals.  One of the important aspects of technology is how complex it actually is under the covers, and the degree and scope of support actually required to get the technology to work properly.  Larger businesses that have high concentrations of technology engineers and analysts are better equipped than the average business to deal with technology issues.  In this respect, cloud computing offers a business a way to “leverage” (another business term thrown around casually) the expertise of a fair number of technology experts without having to hire all of them full time.  One of the dilemmas for business consumers, however, is how much they must trust the technology partner they rent from.  This is the same problem that ASPs faced years ago.  What happens to the data in the cloud when the cloud computing vendor either stops providing the service you are using, or just goes out of business?  How do the businesses work together on transitioning from one cloud to another, or from the cloud back in-house?  What if the business wants to host its own cloud onsite or at its existing hosting facility?  How are changes to the hosted application controlled and tested?  How often are backups performed, and how often are they tested?  How “highly available” is the highly available hosted system, really?  How are disasters mitigated, and what is the service provider’s disaster recovery/business continuity plan?  How are service provider staff hired, and what clearance procedures are employed to ensure that staff aren’t felons who regularly steal identities?  The list of issues is a long one.

The other dilemma for businesses that want to use cloud computing services is that many of these services come with a standard form contract that may not be negotiable, or essential parts of it may not be negotiable.  For example, most cloud computing vendors have hired smart attorneys who have drafted a contract that puts all the liability on the customer if something goes wrong, or that otherwise limits liability so severely that the business customer will need to buy a considerable amount of business insurance to offset the risks that exist with the cloud, should it ever fail, rain, or just leak into the basement.

On the other hand, businesses that have their own IT departments have the same set of risks.  The difference, I think, is that many businesses do not have liability contracts with their otherwise at-will IT staff.  So, if things go horribly wrong (e.g., think “negligence”), the most that might happen to the IT person responsible is immediate termination (except in cases of intentional property theft or destruction, both of which may lead to criminal but not automatic civil liability for the IT person involved).  How much time does a business have to invest to develop and implement effective system policies, the actual systems themselves, and the staff to maintain those systems?

The advent of more widely adopted EHR systems in the U.S. will likely heat up the debate over whether to use cloud computing services or centrally hosted virtual desktops to roll out the functionality of these systems to a broader base of providers (currently estimated at 1 in 5 using some form of EHR).  Companies whose services cost less than the Medicare incentive payments to providers, while helping providers comply with the security regulations, will likely have the most success in the next few years.  Stay tuned!

Green IT: How Virtualization Can Save Earth and Your Butt

Technology continues to evolve, providing people with new functionality, features, information, and entertainment.  According to Ray Kurzweil, a number of metrics for computer performance and capacity indicate that our technology is expanding at an exponential rate.  Sadly, the physical manifestations of technology are also helping to destroy the planet and poison our clean water supplies.  According to the EPA, nearly 2% of municipal waste is computer trash.  While an improvement over recent years, only 18% of computers, televisions, and related solid waste is actually recycled by consumers, which puts millions of tons of unwanted electronics into landfills each year.  Businesses contribute to this problem as well, since they are major consumers of the computers, printers, cell phones, and other electronics needed to operate.

Computers that are placed into a landfill pose a significant environmental threat to people and wildlife.  Electronics can contain a number of hazardous materials, such as lead, mercury, cadmium, chromium, and some types of flame retardants, which, in the quantities found in disposed equipment, pose a real threat to our drinking water.  See the article here for the details.  Lead alone, in sufficient quantities, can damage your central nervous system and kidneys, and heavy metals are retained in the body, accumulating over time until they reach a threshold beyond which you may experience fatal symptoms.  See Lead Poisoning Article.  Mercury, cadmium and chromium aren’t any nicer to people or animals.

Everyone should recycle their electronics through a reputable electronics recycler (see the Turtle Wings website, for example).  However, you can also reduce your server fleet and extend the life of your computer equipment through virtualization.  (See an earlier post on virtualization on this blog.)  Virtualizing your server equipment means using fewer physical servers to present more virtual machines to your user community for print, authentication, file sharing, application, web, and other computer services on your network.  Fewer servers in use means fewer physical server devices to purchase over time and fewer servers to recycle at the end of their life.  Virtualizing your desktops can help by extending the useful life of your desktops (they are just accessing a centrally stored virtual desktop, on which all the processing and storage occurs, so a desktop with modest RAM and CPU will remain usable longer), and also by reducing the amount of electricity your organization uses per computer (if you then switch to a thin client such as a Wyse terminal or an HP thin client device).
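
To put rough numbers on that, here is a back-of-the-envelope sketch (in Python) of what consolidation might save; the server count, consolidation ratio, wattage, and hardware weight are all illustrative assumptions, not measurements:

    # Back-of-the-envelope estimate of what server consolidation can save.
    # Every figure below is an illustrative assumption, not a measurement.
    PHYSICAL_SERVERS_TODAY = 20   # servers currently racked (assumed)
    CONSOLIDATION_RATIO = 10      # virtual machines per physical host (assumed)
    WATTS_PER_SERVER = 400        # average draw per server (assumed)
    SERVER_WEIGHT_KG = 20         # hardware eventually headed to a recycler (assumed)
    HOURS_PER_YEAR = 24 * 365

    hosts_needed = -(-PHYSICAL_SERVERS_TODAY // CONSOLIDATION_RATIO)  # ceiling division
    servers_avoided = PHYSICAL_SERVERS_TODAY - hosts_needed

    kwh_saved_per_year = servers_avoided * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
    ewaste_avoided_kg = servers_avoided * SERVER_WEIGHT_KG

    print(f"Hosts needed after consolidation: {hosts_needed}")
    print(f"Physical servers avoided: {servers_avoided}")
    print(f"Electricity saved: {kwh_saved_per_year:,.0f} kWh per year")
    print(f"Eventual e-waste avoided: {ewaste_avoided_kg} kg")

With those assumed numbers, 20 servers collapse onto 2 hosts, avoiding roughly 63,000 kWh of electricity per year and 360 kg of hardware that would otherwise end up at a recycler (or worse, in a landfill).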

Virtualization can also improve your preparedness for disasters, whether by flood, virus, or terrorist.  For one thing, backing up the data files that represent your virtual servers is easier and can be done during normal business hours, and those files can be far more easily replicated to another site than the contents of a physical server.  Furthermore, virtualization can reduce the entry costs of a disaster recovery site, because you can use less equipment overall to replicate data from your production environment, so your ongoing operating costs are lower than with a physical server configuration.  Testing upgrades is easier, too: you can duplicate a production virtual server and test the upgrade before rolling it out to the live system, which costs less than buying another physical server and running a copy of the system on it just for testing.  Virtualizing desktops also simplifies some of the support and administrative tasks associated with keeping desktops running properly (or fixing them when they stop working right).

So, before you buy another physical desktop or server, think about whether virtualization can help save Earth and you.

How Virtualization Can Help Your DR Plan

Virtualizing your servers can help you improve your readiness to respond to disasters, such as fires, floods, virus attacks, power outages, and the like.  Popular solutions, such as VMware’s ESX virtualization products, in combination with data replication to a remote facility or backups using a third-party application like vRanger, can help speed up your ability to respond to emergencies, or even reduce the number of emergencies that require IT staff to intervene.  This article will discuss a few solutions to help you improve your disaster recovery readiness.

Planning

Being able to respond to an emergency or a disaster requires planning before the emergency arises.  Planning involves the following: (1) having an up-to-date system design map that explains the major systems in use, their criticality to the organization, and their system requirements; (2) having a policy that identifies the organization’s expectations for system uptime, the technical solutions in place to help mitigate risks, and the roles that staff within the organization will play during an emergency; and (3) conducting a risk assessment that reviews the risks, the mitigations in place, and the unmitigated risks that could cause an outage or disaster.
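
The inventory piece lends itself to a simple, structured record per system.  Here is a minimal sketch in Python; the field names and example systems are assumptions for illustration, and a spreadsheet works just as well:

    # A minimal system inventory record for DR planning.
    # Field names and example entries are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class SystemRecord:
        name: str          # e.g., "billing database"
        criticality: str   # "high", "medium", or "low" to the organization
        requirements: str  # platform and resource notes
        rto_hours: float   # how quickly users expect the system restored
        rpo_hours: float   # how much data loss (in hours) users can tolerate

    inventory = [
        SystemRecord("billing database", "high", "Windows/SQL Server, 8 GB RAM", rto_hours=4, rpo_hours=2),
        SystemRecord("intranet wiki", "low", "Linux/Apache", rto_hours=72, rpo_hours=24),
    ]

    # Review the most time-sensitive systems first.
    for record in sorted(inventory, key=lambda r: r.rto_hours):
        print(record)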

Once you have a system inventory, policy, and risk assessment, you will need to identify user expectations for recovering from a system failure, which will provide a starting point for analyzing how far your systems are from those expectations.  For example, if you use digital tape to perform system backups once weekly, but interviews with users indicate that a loss of more than a few hours of data from a particular system could not be re-entered manually, your gap analysis would show that your current mitigation is not sufficient.
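
In other words, the gap analysis boils down to comparing the worst-case data loss your backup schedule allows against the loss users say they can tolerate.  A minimal sketch, with assumed numbers matching the example above:

    # Compare the worst-case data loss a backup schedule allows against the
    # loss users say they can tolerate. Numbers are illustrative assumptions.
    def rpo_gap_hours(backup_interval_hours: float, tolerated_loss_hours: float) -> float:
        """Positive result: the schedule can lose more data than users will accept."""
        return backup_interval_hours - tolerated_loss_hours

    # Weekly tape backups vs. users who can only re-enter a few hours of data by hand.
    gap = rpo_gap_hours(backup_interval_hours=7 * 24, tolerated_loss_hours=4)
    if gap > 0:
        print(f"Gap: backups can lose up to {gap} more hours of data than users tolerate.")
    else:
        print("The current backup schedule meets the stated recovery point expectation.")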

Now, gentle reader, not all user expectations are reasonable.  If you operate a database with many thousands of transactions worth substantial revenue every minute, but your DR budget is relatively small (or non-existent), users get what they pay for.  Systems, like all things, will fail from time to time, no matter the quality of the IT staff or the computer systems themselves.  There is truthfully no excuse for failing to plan for system failures so that you can respond appropriately – but then, I continue to meet people who are not prepared, so…

However, user expectations are helpful to know, because you can use them to gauge how much focus should be placed on recovering from a system failure and, where there are gaps in readiness, to make the case for expanding your budget or resources to improve readiness as much as is feasible.  Virtualization can help.

Technology

First, virtualization generally can help to reduce your server hardware budget, as you can run more virtual servers on less physical hardware – especially those Windows servers that don’t really use much CPU or memory most of the time.  This, in turn, can free up more resources to put toward a DR budget.

Second, virtualization (in combination with a replication technology, either on a storage area network, such as LeftHand, or through another software solution, such as Double-Take) can help you make efficient copies of your data to a remote system, which can be used to bring a DR virtual server up to operate as your production system until the emergency is resolved.

Third, virtual servers can be more easily backed up to disk using software solutions like vRanger Pro, which can in turn be backed up to tape or somewhere else entirely.
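
Whichever route the copies take to the second site, they are only useful if they actually match what you think you copied.  As a rough spot check (replication and backup products verify integrity far more thoroughly, at the block level), here is a minimal sketch that compares checksums of a source file and its replica; the paths are hypothetical:

    # Spot-check that a replicated (or backed-up) VM disk file matches its source
    # by comparing SHA-256 checksums. Paths are hypothetical; replication and
    # backup products perform far more thorough, continuous verification.
    import hashlib
    from pathlib import Path

    def file_sha256(path: Path, chunk_size: int = 1024 * 1024) -> str:
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    source = Path("/backups/billing-db.vmdk")        # hypothetical source copy
    replica = Path("/mnt/dr-site/billing-db.vmdk")   # hypothetical replica at the DR site

    if file_sha256(source) == file_sha256(replica):
        print("Replica matches the source copy.")
    else:
        print("Replica differs from the source -- investigate before relying on it.")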

Virtualization does make recovery easier, but not pain-free.  There is still some work required to make this kind of solution work properly, including training, practice, and testing.  And you will likely need some expertise to help implement a solution (whether you work with VMware, Microsoft, or another virtualization vendor).  On the other hand, not doing this means that you are left to “hope” you can recover when a system failure occurs.  Not much of a plan.

Testing and Practice

Once the technology is in place to help recover from a system failure, the most important thing you can do is practice with that technology and the policy/procedure you have developed, to make sure that (a) multiple IT staff can successfully perform a recovery, (b) you have worked out the bugs in the plan and identified specific technical issues that can be worked on to improve it, and (c) those who will participate in the recovery effort can work effectively under the added stress of performing a recovery with every user hollering “are you done yet?!?”

Some of the testing should be purely technical: backing a system up and being able to bring it up on a different piece of equipment, and then verifying that the backup copy works like the production system.  And some of the testing is discussion-driven: table-top exercises (as discussed on my law web site in more detail here) help staff to discuss scenarios and possible issues.
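
For the purely technical portion, even a crude script helps make the verification repeatable.  The sketch below simply checks that a recovered server answers on the ports its production counterpart serves; the host name and port list are assumptions for illustration:

    # After bringing a backup copy up on different equipment, confirm it answers
    # on the ports the production system normally serves.
    # The host name and port list are illustrative assumptions.
    import socket

    RECOVERED_HOST = "dr-test-server.example.local"   # hypothetical recovered machine
    EXPECTED_PORTS = {80: "web", 443: "web (TLS)", 1433: "SQL Server"}

    def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for port, service in EXPECTED_PORTS.items():
        status = "OK" if port_open(RECOVERED_HOST, port) else "NOT RESPONDING"
        print(f"{service} on port {port}: {status}")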

All of the testing results help to refine your policy, and also give you a realistic view of how effectively you can recover a system from a major failure or disaster.  Some systems (like NT 4.0-based systems) will not be recoverable, no matter what you do.  Upgrading to a recent version of Windows, or to some other platform altogether, is the best mitigation.  In other cases, virtualization won’t be feasible because of current budget constraints, technical expertise, or incompatibility (not every Windows system can be virtualized; some have unique hardware requirements or otherwise won’t convert to a virtual machine).  But there are a fair number of cases where virtualizing will help improve recoverability.

Summary

Virtualization can help your organization recover from disasters when the technology is implemented within a plan that is well-designed and well-tested.  Feedback?  Post a comment.

White House Takes on Cybercrime

According to Yahoo News, President Obama plans to appoint a White House official to be in charge of coordinating the federal government’s response to cybercrime.  This comes after years of reports of identity theft, many tens of thousands of viruses aimed at security holes (mostly in Microsoft operating systems like Windows 98, XP, and 2000), and increasing system security problems for infrastructure providers (like energy companies and utilities).  Click here for an article on hacking into the FAA air traffic control system.  Click here for a summary of attacks on the U.S. Defense Department and the U.S. electrical grid.

The problem is certainly not going away, as the shadow market for hacking services is profiting from successful attacks on systems.  One matter not addressed today that might help improve security is the need for all information systems custodians to regularly report security breaches.  The federal government does track and report the number of attacks on its own systems, but there is no single repository for tracking attacks on private companies.  There is obviously no incentive for a private company to report security problems, since doing so can drive away customers and could put the company out of business.  But even a single, national, anonymous reporting system would be a start toward gauging the depth of the problem.  Security practices are also a relevant consideration for consumers who give data to a company to transact business, such as credit card, health, financial, or other personal information.  Consumers should have the right to know about the security practices of businesses, and the effectiveness of those practices in protecting information from unauthorized use.

Furthermore, unless the market reflects the cost of security in the pricing of services, businesses will continue to operate without sufficient security in place, and our economy will continue to be at risk of being shut down by terrorists and hackers.  I suspect that this may be one of the areas where the market failure is so substantial that government intervention is justified to more seriously regulate computer security, especially in critical areas of the economy like banking, infrastructure, and the like.