Meaningful Use Gets A Definition

The Centers for Medicare and Medicaid Services (CMS) and the Department of Health and Human Services have released for public comment proposed rules that help, among other things, to define “meaningful use” in the context of electronic health record systems.  As astute readers will note, the federal government intends to start making incentive payments in 2011 to qualifying providers who can demonstrate meaningful use of a certified electronic health record system.  2009 was spent trying to figure out just what that phrase means, and the proposed rules (available here) provide CMS’s first formal attempt at a definition.

There are three stages of meaningful use, which must be demonstrated at various points during the incentive payout schedule, depending on when the provider adopts an EHR.  If, for example, a provider adopted an EHR in 2011, the provider would be required to demonstrate Stage 1 compliance in 2011 and 2012, Stage 2 compliance in 2013 and 2014, and Stage 3 compliance in 2015 in order to receive the incentive payments from the Medicare program.  (There is a helpful chart on page 46 of the draft regulations.)  If a provider were instead to adopt in 2015, the provider would have to demonstrate Stage 3 compliance from the outset.  For those of you thinking about waiting a few years before adopting an EHR (apparently we “adopt” rather than “birth” these systems, though from the squeals of some users, I would think the pangs of birthing are a more appropriate metaphor), be forewarned: late adoption means the expectations for demonstrating meaningful use are higher if you want to get an incentive payment.

Stage 1 requirements are described in the proposed § 495.6(c), and include the following items:

(c)(1) drug interaction checking

(c)(2) problem list

(c)(3) active medication list

(c)(4) active allergy list

(c)(5) basic patient demographics

(c)(6) record basic height, weight, blood pressure, BMI, and peds growth charts

(c)(7) smoking status

(c)(8) store lab results in structured data format

(c)(9) be able to produce a list of patients by disease condition

(c)(10) implement 5 clinical decision support rules based on provider specialty or priority

(c)(11) use electronic insurance eligibility

(c)(12) submit claims electronically

(c)(13) perform a medication “reconciliation”

(c)(14) provide a summary of care record for each referral, or “transition of care”

(c)(15) capacity to submit data to immunization registries

(c)(16) capacity to submit electronic surveillance data to public health agencies

(c)(17) comply with HIPAA security regs via risk assessment and mitigations

And if that wasn’t enough, section (d) gives some more rules to comply with:

(d)(1) use “computerized provider order entry”

(d)(2) send prescriptions electronically

(d)(3) report ambulatory quality measures to CMS

(d)(4) send patient reminders for preventive care

(d)(5) provide patients with electronic health record data on request

(d)(6) provide patients with timely electronic access to their health data

(d)(7) provide each patient a clinical summary at each visit

(d)(8) exchange data with HIE’s

There is a helpful chart of these requirements that starts on page 103 of the proposed regulations and points out where the criteria vary depending on whether a hospital or an individual provider is attempting to demonstrate compliance.  The regulations also propose measures for demonstrating compliance with each Stage 1 requirement, which are included in each requirement’s respective section.

Now, you probably are wondering what the Stage 2 and Stage 3 criteria are.  From my reading of the regulations, these have not yet been promulgated.  CMS is, however, working on it.  They even give you a teaser on page 109 of some potential Stage 2 requirements.

Many of these requirements are hardly a shocker (recording a patient’s name, height, weight, and current medications, for example).  These core requirements are essential to any system purporting to manage health.  There are, however, a number of very interesting requirements that may be much harder for systems to meet.  For example, requiring that 80% of patient insurance information be verified electronically may be quite a stretch (and in many cases, this is not handled by a health record system at all, but instead by a practice management system).

Providing patients with electronic access to their health data within 96 hours of its receipt may also require a wave of web site, firewall, and secure socket layer certificate purchases by providers, many of whom do not support this kind of access to their health record systems today.  And while it would be wise to perform a medication “reconciliation” at each visit, it is not clear from the regulations as written how this would be accomplished.

Stay tuned for additional postings on this very important topic!  And be sure to send in comments to CMS on these rules based on your experiences with EHR technology in your practice.

The Future of Health IT

An important aspect of President Obama’s health plan (partly funded through this year’s stimulus package) is health technology.  As noted in a prior blog post, section 4101 of the ARRA provides qualifying health providers with Medicare reimbursement incentives for implementing Health IT that meets the statutory criteria set out in that section: meaningful use, participation in a health data exchange, and clinical outcomes reporting.  By setting standards and providing incentives, the federal health policy will have a substantial impact on health care technology over the next five years, as billions of dollars are poured into health IT investment.

The question presented here is where will all of this public and private investment lead Health IT in the next few years?

Data Exchange And Availability

One of the areas of major emphasis in the ARRA is the ability of health care providers to more easily share information with other providers, patients, and others who have a legal right to receive such data.  In particular, emphasis has been placed on the ability to transmit data to health exchanges and to produce data for the Feds on health outcomes (such as reporting hemoglobin A1c values over time to evaluate whether a diabetic patient is responding positively to the care provided).  Health data exchanges are on the rise: according to ihealthbeat.org, there are now 57 operational exchanges, up from 42 the year prior.  These exchanges are being used to move data between individual providers in an effort to improve care coordination and care quality.
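
For the technically inclined, here is a minimal sketch of the kind of outcome report I have in mind, written in Python against a throwaway SQLite table.  The table and column names (and the sample values) are hypothetical illustrations of mine, not anything prescribed by the ARRA or CMS.

```python
# A minimal sketch (not from the regulations) of an outcome query an EHR or HDE
# might run: pull a diabetic patient's hemoglobin A1c results over time to see
# whether the values are trending down. Table/column names and values are
# hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE lab_results (
    patient_id TEXT, test_code TEXT, value REAL, taken_on TEXT)""")
conn.executemany(
    "INSERT INTO lab_results VALUES (?, ?, ?, ?)",
    [("p001", "A1C", 8.9, "2009-01-15"),
     ("p001", "A1C", 7.8, "2009-06-10"),
     ("p001", "A1C", 7.1, "2009-12-02")])

rows = conn.execute(
    "SELECT taken_on, value FROM lab_results "
    "WHERE patient_id = ? AND test_code = 'A1C' ORDER BY taken_on",
    ("p001",)).fetchall()

for taken_on, value in rows:
    print(taken_on, value)

# A crude "responding to care" check: is the most recent A1c lower than the first?
print("Improving:", rows[-1][1] < rows[0][1])
```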

More specifically, for patients with several doctors who may specialize in a variety of treatments or health conditions, health exchanges have the potential to ensure that lab data ordered by one physician is made available in a secure and reliable manner to all of the physicians involved in providing care.  Health exchanges also can ensure that a patient’s medical history (particularly their prescription history) is available in a consistent format to all care providers, saving time at each visit and reducing risks to patients who might forget a prescription or a past medical procedure.  Sharing lab results also has the potential to reduce costs and patient injury by reducing the number of duplicative tests ordered for the same patient by different providers.  This is a common problem for patients with a coordinating care provider who end up in the hospital, where the attending physician is stuck ordering duplicate tests.

Looking into the future, I would expect health data exchanges (HDEs) to become more prevalent so long as the total cost to implement and maintain an HDE is less than the costs saved or avoided through the data it makes available.  Another factor that will impact the growth of HDEs is the number of peer-reviewed studies of their efficacy.  Today, there is relatively little information on this topic because most HDEs are new or still under development, but in the next few years more definitive information should be available for analysis and review by eager technologists and researchers.

One of the great challenges for the HDE movement is maintaining patient privacy.  HIPAA was implemented in part specifically to address patient privacy, as were a number of state laws on this topic (for example, the Maryland Medical Records Act, see Md. Health-Gen. Code Ann. § 4-301 et seq.).  And other states are getting in on the action to protect consumer privacy, including Massachusetts, Minnesota, and Nevada, just to name a few.

However, laws alone may not be enough to effectively regulate and protect the availability of health data.  Under the present HIPAA enforcement regulations (which were modified by the ARRA this year), the top fines (where the act in violation of the security regulations was negligent rather than intentional) are relatively low compared to the potential size of an HDE (for example, if a company like Google or Microsoft were to become a dominant HDE), because the fines are a flat rate per incident rather than being scaled to the company’s gross revenue or the severity of the breach or finding.  The ARRA did move in the right direction this year by replacing HIPAA’s original enforcement scheme with a four-tiered approach to violations, but further scaling may be required for this to become an effective deterrent to lax security practices.

Furthermore, having a patchwork of privacy laws increases the overall cost of compliance for HDEs, which increases the cost to implement these systems without necessarily improving the actual security of the information stored at the HDE.  This is caused by the regulatory requirements of overlapping and potentially conflicting laws, along with the need to respond to multiple authorities with the right to audit or investigate the HDE (as larger HDEs will undoubtedly operate across state lines).  Sadly, I imagine this problem will probably get worse before it gets better, given the number of relatively autonomous sovereign powers within our country (50 states plus the federal government) and the scope and scale of the privacy issue being considered.

Attitudes Towards Privacy

Our future privacy policies may also be shaped by the attitudes of today’s youth toward privacy.  Social networking sites, for example, expose a great deal of information about the youngest among us, but the predominant users of these systems don’t seem to mind very much.  Now, of course, Facebook is not known for the health data available on its users, so who knows whether college kids would post their latest hemoglobin values as readily as they post about the parties they attend and the pictures snapped of them by the college paparazzi, but it stands to reason that the next generation’s attitudes toward privacy will be substantially different from those of the generation currently called to govern the nation.

The result may be less concern about privacy, paired with increasing criminal penalties for those who engage in theft of information.  For example, perhaps instead of worrying so much about whether health data is squirreled away in an underground bunker with Dick Cheney, the future leaders of the nation will make this data generally available via the internet, ultimately reducing its value to would-be thieves.  For myself, I can’t say it matters much if others know that I have high cholesterol and a family history of diabetes, but then, I also don’t think there is much stigma attached to either of these conditions as there might once have been (or might still be for other health issues).

Data Quality and Trusted Sources

HDEs will also need to address head on the quality and reliability of the data stored in their databases.  Today, data exchange arrangements generally do not go beyond the initial setup of some kind of private network and agreement on the file formats that are acceptable for the data being exchanged.  Inherently, one system trusts the data it receives from the other and merely re-publishes it into its own database, identifying the source of the data.  Usernames and passwords may just not be enough for everyone to know that the data being sent or received is accurate and reliable.

In addition, even though HIPAA (and some other laws) place a modest emphasis on technical encryption, the truth is that little has really been done with these technologies in most systems today to ensure that data entered cannot later be repudiated by the person who purportedly entered it.  For example, many commercially available database systems are not natively encrypted.  Local area network activity on the wire is rarely encrypted, again relying on border security devices to keep outsiders out of LAN traffic.  Passwords are not consistently complex across an enterprise (especially where multiple database systems maintain their own passwords and accounts), and certainly cannot reasonably be changed frequently enough to ensure a password has not been compromised (without the user community revolting against the IT staff).  And users routinely share passwords, even though there is a federal law against it and in spite of repeated messages from system administrators not to do so.

Furthermore, data exchanged between systems relies on the initial configuration of the networking that connects the two systems remaining uncompromised.  In the typical data exchange design, there is no further verification to ensure that the messages actually received are correct.  TCP itself was designed with a checksum in each packet, but that only tells the receiver whether the packet received matches what was sent, not whether the data came from the human or system source alleged (e.g., the laboratory technician or physician who actually created the entry in the first place).
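
To make the distinction concrete, here is a minimal sketch of data-origin authentication with a digital signature, which is the sort of assurance a checksum cannot give you.  It assumes the third-party Python “cryptography” package, and the message format is a hypothetical one of my own invention.

```python
# A minimal sketch of data-origin authentication, which a TCP checksum cannot
# provide: the producing system signs each message with a private key, and the
# receiving system verifies the signature with the matching public key before
# trusting the entry. Assumes the third-party "cryptography" package; the
# payload below is hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# In practice the lab system would hold this key long-term; the key pair here
# is generated on the fly purely for illustration.
lab_private_key = Ed25519PrivateKey.generate()
lab_public_key = lab_private_key.public_key()

message = b"patient=p001;test=A1C;value=7.1;entered_by=tech42"
signature = lab_private_key.sign(message)

# Receiving system: verify() raises InvalidSignature if the message was altered
# in transit or did not come from the holder of the lab's private key.
try:
    lab_public_key.verify(signature, message)
    print("Signature valid: message came from the lab system, unaltered.")
except InvalidSignature:
    print("Rejecting message: signature check failed.")
```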

I anticipate that the future of authentication will move toward far more sophisticated, multi-level authentication (even though the biometric movement seems to have lost steam, at least in the general consumer market).  For example, instead of or in addition to a username and password, systems may require a token or other physical card to grant access (such systems exist and are in general use today).  Other security measures may involve thumbprints or other biometrics.  I would also imagine that more sophisticated encryption algorithms could be used beyond a 128-bit cipher, and that encryption might occur at a more basic level than it does today (if transmissions are encrypted at all).  For example, databases themselves may be encrypted at a record or table level, or application access could be managed through an encrypted socket instead of the plain text many applications use now.
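
As an illustration of what record-level encryption might look like, here is a minimal sketch using symmetric encryption from the same Python “cryptography” package.  The record contents and key handling are simplified assumptions for the example, not a recommendation of any particular product.

```python
# A minimal sketch of record-level encryption, one of the possibilities
# mentioned above: each sensitive field is encrypted before it is written to
# the database, so a copy of the raw database file (or a curious administrator
# without the key) sees only ciphertext. Assumes the third-party
# "cryptography" package; the record content is hypothetical.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, held in a key management system
cipher = Fernet(key)

plaintext_record = b"p001|high cholesterol|family history of diabetes"
stored_value = cipher.encrypt(plaintext_record)  # what actually lands in the table

# Only a holder of the key can recover the original record.
print(cipher.decrypt(stored_value).decode())
```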

Beyond controlling user access to enter data, surely there could be some additional layer of verification once data has been received from a producer system, so that the data is, by design, independently verified before being committed to the receiving system.  The alteration (or just erroneous entry) of data in transport from one system to another creates the real possibility of a bad health care decision by professionals relying on that data.  This is certainly one of the major weaknesses of consumer-level HDEs such as those from Google or Microsoft, which must rely on consumers to enter their own lab and pharmaceutical information into the database when that data is not available electronically, or on data providers that rely on administrative or clerical staff to do the data entry without further review before distribution.
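
Here is one hedged sketch of what such a verification layer might look like before a record is committed: recompute an integrity hash supplied by the producer and apply a basic plausibility check.  The field names and the “plausible” range are hypothetical illustrations of mine, not clinical guidance.

```python
# A minimal sketch of independent verification before commit: the receiving
# system recomputes a hash supplied by the producer and applies a plausibility
# check before the record is allowed into its database.
import hashlib

incoming = {
    "patient_id": "p001",
    "test_code": "A1C",
    "value": 7.1,
    "sha256": None,  # normally filled in by the producing system
}
payload = f"{incoming['patient_id']}|{incoming['test_code']}|{incoming['value']}"
incoming["sha256"] = hashlib.sha256(payload.encode()).hexdigest()  # simulate producer

def verify_before_commit(record):
    recomputed = hashlib.sha256(
        f"{record['patient_id']}|{record['test_code']}|{record['value']}".encode()
    ).hexdigest()
    if recomputed != record["sha256"]:
        return False, "hash mismatch: possible alteration in transport"
    if not (3.0 <= record["value"] <= 20.0):   # hypothetical plausibility window
        return False, "value outside plausible range: possible entry error"
    return True, "ok to commit"

print(verify_before_commit(incoming))
```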

HDE Information Security

Today, a number of technologies exist that allow for data backup and redundancy to ensure that systems can be highly available and resistant to significant environmental or system disasters.  One category of technology is called “cloud computing,” which is a kind of modern equivalent to what application service providers (ASPs) of the 1990’s were offering, or what the ancient mainframes of yesteryear offered to computing users back in the bad old days of the 1970’s.  What is fundamentally different today, however, is the possibility of having massively redundant and distributed information systems that belong to a cloud, whereas both ASP and mainframe computing were often centralized in one server room, or a series of server rooms in one facility.

A common example of computing in the cloud today is Gmail, the email service provided by Google for free to consumers.  There are still, somewhere, servers connected to the internet and controlled by Google that will respond to SMTP requests, but Google most likely has these servers distributed all over the planet and connected to a larger, redundant network infrastructure.  Data stored on these servers is likely replicated in real time so that all Gmail replication partners are up to date, regardless of which one you actually connect to when you use your web browser to navigate to your email account.  Gmail has been around for some time now, and there are a fair number of users (26 million according to one article as of last September; Wikipedia claims 146 million Gmail users each month as of July 2009).  Perhaps Health IT will be the next internet “killer app.”

And looking down the road, the future of Health IT likely involves some kind of “cloud computing” model where health data is not stored locally on an organization’s server.  This model will provide for additional flexibility with data transfer, improved system redundancy, and higher availability than is typically possible in a single enterprise or within a single server room.

Cloud computing, however, does pose other security and privacy concerns.  (See this article on CNET that addresses some of these same concerns.)  For example, will staff of the cloud computing service have some kind of access to the actual data entered into the system?  Will these systems have a way of keeping those administrators from changing or accessing data (for example, by encrypting the data to place it out of reach of administrators)?  Who is liable for loss of the data?  Will the HDE seek to (and will courts and lawmakers allow it to) unreasonably limit liability for unauthorized access?  Will the HDE be indemnified by a government agency?  Will the HDE pay for itself by allowing advertisers access to the data it stores?  Will it take a more democratic approach (for example, as Facebook has recently done in asking its user community to ratify changes to the policies that affect it)?

Stay tuned.

Cloud Computing and Other Buzz Words

The technology that drives health care today is changing in response to increased concerns about security and reliability, and to external regulations like the HIPAA security regulations.  In addition, the HITECH portion of this year’s stimulus law has provided incentives for health care providers to adopt technology that allows for health data exchange and for quality reporting (a data-driven process for reporting outcomes on certain quality measures as defined by the Secretary of Health and Human Services).  There are a fair number of technology vendors that provide electronic health records (EHR) systems today, and also a fair number of vendors that have developed business intelligence or more sophisticated data reporting tools.  Health data exchange is a newer field; Google and Microsoft have begun developing systems that allow users to establish a personal health record database, and some states have started planning for larger-scale data repositories, but this concept is still in its beginning stages.

A buzz word today in technology is “cloud computing,” which is a fancy way of describing internet systems that businesses can rent from service providers to perform business tasks.  The idea is not new, even if the buzz word is; in days of yore, we called these “application service providers,” or ASPs for short.  I suppose the IT marketing folks got sick of being compared with a nasty snake and thought clouds were better (or maybe more humorous, if they had ever read Aristophanes).  Of course, the pejorative “vaporware,” which roughly describes a software vendor marketing a product it does not yet actually have to sell, also rings of clouds and things in the sky.  And the old “pie in the sky,” as a way of saying “that’s a nice idea but has no hope of being useful down here where mere mortals live,” could also relate to clouds.

That aside, there may be something to cloud computing for us mere mortals.  One of the important aspects of technology is how complex it actually is under the covers, and the degree and scope of support actually required to get the technology to work properly.  Larger businesses with high concentrations of technology engineers and analysts are better equipped than the average business to deal with technology issues.  In this respect, cloud computing offers a business a way to “leverage” (another business term thrown casually around) the expertise of a fair number of technology experts without having to hire all of them full time.  One of the dilemmas for business consumers, however, is how much they need to be able to trust the technology partner they rent from.  This is the same problem ASPs faced years ago.  What happens to the data in the cloud when the cloud computing vendor either stops providing the service you are using, or just goes out of business?  How do the businesses work together on transitioning from one cloud to another, or from the cloud back in-house?  What if the business wants to host its own cloud onsite or at its existing hosting facility?  How are changes to the hosted application controlled and tested?  How often are backups performed, and how often are they tested?  How “highly available” is the highly available system being hosted?  How are disasters mitigated, and what is the service provider’s disaster recovery/business continuity plan?  How are service provider staff hired, and what clearance procedures are employed to ensure that staff aren’t felons who regularly steal identities?  The list of issues is a long one.

The other dilemma for businesses that want to use cloud computing services is that many of these services have a standard form contract that may not be negotiable, or essential parts of which may not be negotiable.  For example, most cloud computing vendors have hired smart attorneys who have drafted a contract that puts all the liability on the customer if something goes wrong, or otherwise limits liability so severely that the business customer will need to buy a considerable amount of business insurance to offset the risks that exist with the cloud, should it ever fail, rain, or just leak into the basement.

On the other hand, businesses that have their own IT departments have the same set of risks.  The difference, I think, is that many businesses do not have liability contracts with their otherwise at-will IT staff.  So, if things go horribly wrong (e.g., think “negligence”), the most that might happen to the IT person responsible is immediate termination (except in cases of intentional property theft or destruction, both of which may lead to criminal but not automatic civil liability for the IT person involved).  How much time does a business have to invest to develop and implement effective system policies, the actual systems themselves, and the staff to maintain those systems?

The advent of more widely adopted EHR systems in the U.S. will likely heat up the debate over whether to use cloud computing services or centrally hosted virtual desktops to roll out the functionality of these systems to a broader base of providers (currently, an estimated 1 in 5 providers use some EHR).  Companies whose services cost less than the Medicare incentive to providers, while helping those providers comply with the security regulations, will likely have the most success in the next few years.  Stay tuned!

Health IT & Open Source

The truth is that I may just be getting annoyed about this debate.  A recent blog posting on Wired (click here for the article) frames the debate over health technology in terms of open source versus legacy or proprietary code, the latter being the enemy of innovation, improved health outcomes, and usability.

First off, an open source program is merely one governed by an open source license (often some version of the GPL), which means that other developers can reverse engineer it, make derivative works, or otherwise include your open source code in their subsequent open source code.  Developers who freely work together to write something cool are developers writing code.  They aren’t necessarily health experts, physicians, or efficiency gurus; in fact, they may not even have health insurance if they live in the U.S. (1 in 6 of us are uninsured).  The fact that code is open source does have a big impact on how U.S. copyright law protects the work, but it doesn’t mean that an open source developer is somehow more in tune with health IT requirements, how best to integrate a system into a physician’s practice, or even what the actual requirements are for a physician to see a patient and document the visit so as to avoid liability for fraud or malpractice.  That’s because for developers, requirements come from outside the development community, from users.

And guess what – proprietary developers of software listen to their user community to understand their requirements.  It’s part of the job of developers, regardless of whether the code is open source or proprietary.  And, for everyone participating in the global economy, the people that pay for your product generally drive the features and functionality in it.  If you can’t deliver, then your user base will go find someone else who can deliver.

Now, for larger health organizations, health records systems are a multi-year investment.  This inherently locks that health organization into a longer term, and more conservative, relationship with their health IT vendor, which tends to reduce the amount of change introduced into a health records system over time – especially for the larger vendors that have a lot of big clients.  The little developer out there writing code at 3am is certainly going to respond to market changes far more quickly than a really big corporation with a health IT platform.  But you know what?  Try getting the little guy to support your 500 desktop installations of his software 24×7.  Do you really think he can afford to staff a help desk support function around the clock for your business?  What happens when he has two customers with emergencies?  Or he wants to get some sleep?  And what about change control?  Even big vendors stumble in testing their code to make sure it works and is secure before releasing it (think Microsoft).  Solo, open source developers, even working in informal teams, are going to miss at least as often as a larger vendor, and introducing a lot more changes just increases the frequency that an untested change becomes an “unpublished feature” aka “blue screen of death.”  Trust me on this one: the health care user base is not going to be very tolerant of that.

Repeatedly, I hear the refrain that this stimulus money is going to go to systems that can be put to a “meaningful use,” and that this will exclude rogue open source Health IT developers from being funded, squelching innovation in the marketplace.  I imagine that complying with the security regulations under HIPAA probably hinders innovation, too, but those regulations increase the reliability of the system vendors that remain in the marketplace and reduce the risk to the patient data that might be in their computer systems.  Setting minimum standards for health records systems may favor incumbent systems, but honestly, is that so wrong?  Isn’t the trade-off here that when someone buys a certified system, they can have the satisfaction of knowing that someone else, without a vested interest in the product, thought it had certain features or a proven record of delivering certain outcomes?  Perhaps the certifiers aren’t neutral because they come from the EHR industry, but if I recall correctly, the people who run the internet have committees with representatives from the internet industry, yet I rarely hear that the standards for the POP3 protocol unfairly burden new or open source developers.

Having a government agency set standards for EHRs is a lot like the government setting the requirements for you to receive a driver’s license.  Everyone who drives needs to understand what the red, octagonal sign with the capital letters S T O P means.  On the other hand, you may never parallel park again, but you had better learn how to do it if you want a license to drive in Maryland.  Standards are always a mixed bag of useful and not-so-useful rules, but I don’t think there are too many people out there arguing that the government should not set minimum standards for drivers.  A certification requirement that establishes minimum standards for EHRs is no different.  Ask the JCAHO people about it.  Ask the HIPAA police.  Ask the IT people you know.  If you are going to develop an EHR, you had better secure it, make sure the entries in the database are non-repudiable, and have a disaster recovery approach.  Don’t know what these things are?  Do your homework before you write a computer system.

Now, another refrain is to look at how all of these proprietary systems have failed the world of health care delivery.  For example, look at how more kids died at the Children’s Hospital ER in Pittsburgh after the hospital implemented an EHR (I can feel a class action lawsuit in federal court).  Who implements EHRs in ERs?  So the doctor is standing there and a patient is having a heart attack.  What should the doctor’s first act be?  To register the patient into the EHR and record his vitals?  I would think the doctor should be getting out the paddles and worrying about the patient’s heartbeat, but then, I am an attorney and systems guy, not a physician.  Look – dumb decisions to implement a computer system should not lead subsequent critics to blame the computer system for not meeting the requirements of the installation.  An EHR is not appropriate every place patients are seen or for every workflow in a health care provider’s facility.  No knock on the open source people, but I don’t want my ER physician clicking on their software when I am dying in the ER, either.  I don’t want my doctor clicking anything at all – I want her to be saving me.  That’s why I have been delivered to the ER.

Now, VistA is getting a lot of mileage these days as an open source, publicly funded, and successful example of EHR in action.  And it is free.  But in fairness, VistA is not a new piece of software recently written by three college kids in a garage somewhere in between World of Warcraft online gaming sessions.  This program has been in development for years.  And “free” is relative.

For example, if you want support, you need to pay for it.  If you want to run it in a production environment, you will need to buy equipment and probably get expert help.  If you want to implement it, you will need to form a committee, develop a project plan, implement the project intelligently with input from your users, and be prepared to make a lot of changes to fit this system (or any system) into your health facility’s workflows.  And if you find yourself writing anything approaching software, that will cost you something, too, as most health care providers do not have a team of developers available to them to modify any computer system.  So, “free” in this context is relative, and genuinely understates the scope and effort required to get any piece of software to work in your facility.  “Less” may be a more appropriate adjective.  But then, that’s only true if you can avoid costly modifications to the software, and so far, there is no single EHR system that works in every setting, so expect to make modifications.

That’s my rant.  Happy EHR-ing!

The Battle Over Health IT Has Begun

The battle lines on how to spend the money for technology to improve health care are beginning to be drawn.  As a former director of an IT department at a health center which implemented a proprietary health record system in 2003, I can offer a useful perspective on some of the issues.  Phillip Longman’s post on health records technology discusses the issue of using a closed versus an open source health records system, which is part of the larger debate on open source and its impact on application development online.

I’m generally a fan of the open source community.  The shareware people have been developing useful applications and offering them to the public since I started using a PC as a kid back in the ’80s.  There is a lot to be said for application development done in a larger community where sharing is OK.  For example, my blog is a WordPress blog, which is open source blogging software that provides a platform not just for writers like me, but also for developers to create cool plugins for WordPress blogs that do all sorts of nice things, like integrating with Google Analytics, backing up your blog, or modifying your blog’s theme, just to name a few that I happen to use regularly (thanks, all of you that are linked to).

In 2003, we looked at a number of health records systems, ultimately allowing our user community at the time to choose between the two finalists, both of which were proprietary systems.  One of my observations at the time was that there was a wide array of information systems that were available to health care providers, some of which were written by fellow practitioners, and others that were written by professional developers.  I would be willing to bet that today there are even more health IT systems out in the market place.  We ended up going with a product called Logician, which at the time was owned by MedicaLogic (now a subsidiary of the folks at GEMS IT, a division of General Electric).

Logician (now called Centricity EMR) is a closed source system that runs in Windows, but allows for end users to develop clinical content (the electronic equivalent to the paper forms that providers use to document care delivery) and to share that clinical content with other EMR users through a GE-hosted web site for existing clients of the system.  In addition, Logician has a substantial following to support a national user group, CHUG, and has been around long enough for a small cottage industry of developers to create software to integrate with Logician (such as the folks at Kryptiq, Biscom, and Clinical Content Consultants who subsequently sold their content to GEMS IT for support).

After six years of supporting this system, I can assure you that this technology has its issues.  That’s true, of course, of almost all information systems, and I would not suggest that the open source community’s eclectic collection of developers necessarily produces anything less buggy or easier to support.  And, in fact, I don’t have any opinion at all as to whether health records would be better off in an open source or a proprietary health record system.  Health professionals are very capable of independently evaluating the variety of information systems and choosing a system that will help them do their jobs.  One of the big reasons these projects tend to fail is a lack of planning and investment in the implementation of the system before the thing gets installed.  This process, which, when done right, engages the user community in guiding the project to a successful go-live, is probably more important and actually takes more effort than the information system itself.

Mr. Longman criticizes the development model of “software engineers using locked, proprietary code” because this model lacks sufficient input from the medical users who ultimately must use the system in their practices.  I suppose there must be some health records systems out there that were developed without health provider input, but I seriously doubt they are used by all that many practices.  I do agree with Mr. Longman that there are plenty of instances where practices tried to implement a health records system and ended up going back to paper.  We met several of these failed projects in our evaluation process.  But I would not conflate proprietary systems with the failure to implement; proprietary systems that actually include health providers in their development process can be successfully implemented.  Open source can work, too.  As Mr. Longman points out, the VA has been using an open source system now called VistA, which works well within the VA’s closed delivery system (patients at the VA generally get all of their care at a VA institution and rarely go outside for health care).

My point is that the labels “open source” and “proprietary” alone are not enough to predict the success or failure of a health records system project.  Even a relatively inexpensive, proprietary, and functionally-focused system that is well implemented can improve the health of the patients served by it.  There is a very real danger that the Obama administration’s push for health IT will be a boondoggle given the scope and breadth of the vision of health IT in our country.  But the health industry itself is an enormous place with a wide variety of professionals, and the health IT market place reflects this in the varied information systems (both open source and proprietary) available today.  I would not expect there to be any one computer system that will work for every health care provider, regardless of who actually writes the code.

How Virtualization Can Help Your DR Plan

Virtualizing your servers can help you improve your readiness to respond to disasters such as fires, floods, virus attacks, power outages, and the like.  Popular solutions such as VMware’s ESX virtualization products, in combination with data replication to a remote facility or backups using a third-party application like vRanger, can speed up your ability to respond to emergencies, or even reduce the number of emergencies that require IT staff to intervene.  This article will discuss a few solutions to help you improve your disaster recovery readiness.

Planning

Being able to respond to an emergency or a disaster requires planning before the emergency arises.  Planning involves the following: (1) having an up-to-date system design map that explains the major systems in use, their criticality to the organization, and their system requirements; (2) having a policy that identifies what the organization’s expectations are with system uptime, the technical solutions in place to help mitigate risks, and the roles that staff within the organization will play during an emergency; and (3) conducting a risk assessment that reviews the risks, mitigations in place, and unmitigated risks that could cause an outage or disaster.

Once you have a system inventory, policy, and risk assessment, you will need to identify user expectations for recovering from a system failure, which will provide a starting point for analyzing how far your systems are from those expectations.  For example, if you use digital tape to perform system backups once weekly, but interviews with users indicate that data from a particular system cannot be reconstructed manually if more than a few hours of it are lost, your gap analysis would indicate that your current mitigation is not sufficient.
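
If it helps, here is a minimal sketch of that gap analysis in Python.  The system names, backup intervals, and tolerances are hypothetical; the point is simply to compare worst-case data loss against what users say they can tolerate.

```python
# A minimal sketch of the gap analysis described above: compare how much data
# the current backup scheme could lose in the worst case against how much loss
# users say the organization can tolerate. All numbers are hypothetical.
from datetime import timedelta

systems = [
    {"name": "EHR database", "backup_interval": timedelta(weeks=1),
     "tolerable_loss": timedelta(hours=4)},
    {"name": "File server", "backup_interval": timedelta(days=1),
     "tolerable_loss": timedelta(days=2)},
]

for s in systems:
    # Worst case, a failure just before the next backup loses one full interval.
    worst_case_loss = s["backup_interval"]
    gap = worst_case_loss - s["tolerable_loss"]
    if gap > timedelta(0):
        print(f"{s['name']}: mitigation NOT sufficient "
              f"(could lose {worst_case_loss}, users tolerate {s['tolerable_loss']})")
    else:
        print(f"{s['name']}: current backups meet user expectations")
```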

Now, gentle reader, not all user expectations are reasonable.  If you operate a database with many thousands of transactions worth substantial amounts in revenue every minute, but your DR budget is relatively small (or non-existent), users get what they pay for.  Systems, like all things, will fail from time to time, no matter the quality of the IT staff or the computer systems themselves.  There is truthfully no excuse for not planning for system failures to be able to respond appropriately – but then, I continue to meet people who are not prepared, so…

However, user expectations are helpful to know, because you can use them to gauge how much focus should be placed on recovering from a system failure, and, where there are gaps in readiness, to make the case for expanding your budget or resources to improve readiness as much as is feasible.  Virtualization can help.

Technology

First, virtualization generally can help to reduce your server hardware budget, as you can run more virtual servers on less physical hardware – especially those Windows servers that don’t really do that much (in CPU and memory terms) most of the time.  This, in turn, can free up more resources to put toward a DR budget.

Second, virtualization (in combination with a replication technology, either on a storage area network, such as Lefthand, or through another software solution, for example, Doubletake) can help you to make efficient copies of your data to a remote system, which can be used to bring a DR virtual server up to operate as your production system until the emergency is resolved.

Third, virtual servers can be more easily backed up to disk using software solutions like vRanger Pro, which can in turn be backed up to tape or somewhere else entirely.

Virtualization does make recovery easier, but not pain-free.  There is still some work required to make this kind of solution work properly, including training, practice, and testing.  And you will likely need some expertise to help implement a solution (whether you work with VMware, Microsoft, or another virtualization vendor).  On the other hand, not doing this means that you are left to “hope” you can recover when a system failure occurs.  Not much of a plan.

Testing and Practice

Once the technology is in place to help recover from a system failure, the most important thing you can do is practice with that technology and the policy/procedure you have developed, to make sure that (a) multiple IT staff can successfully perform a recovery, (b) you have worked out the bugs in the plan and identified specific technical issues that can be worked on to improve it, and (c) those who will participate in the recovery effort can work effectively under the added stress of performing a recovery with every user hollering “are you done yet?!?”.

Some of the testing should be purely technical: backing a system up and being able to bring it up on a different piece of equipment, and then verifying that the backup copy works like the production system.  And some of the testing is discussion-driven: table-top exercises (as discussed on my law web site in more detail here) help staff to discuss scenarios and possible issues.
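
For the purely technical piece, a simple way to confirm a restore worked is to compare checksums between the production data and the restored copy.  Here is a minimal sketch in Python; the paths are hypothetical placeholders for wherever your data actually lives.

```python
# A minimal sketch of restore verification: after bringing a backup up on
# different equipment, hash every file in the restored copy and compare it
# against the production system to confirm the data came back intact.
import hashlib
from pathlib import Path

def checksum(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def compare_trees(production_root: Path, restored_root: Path) -> list:
    """Return a list of files that are missing or differ in the restored copy."""
    problems = []
    for prod_file in production_root.rglob("*"):
        if not prod_file.is_file():
            continue
        restored_file = restored_root / prod_file.relative_to(production_root)
        if not restored_file.exists():
            problems.append(f"missing: {restored_file}")
        elif checksum(prod_file) != checksum(restored_file):
            problems.append(f"differs: {restored_file}")
    return problems

# Example with hypothetical paths:
# print(compare_trees(Path("/data/production"), Path("/mnt/dr-restore")))
```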

All of the testing results help to refine your policy, and also give you a realistic view of how effectively you can recover a system from a major failure or disaster.  Some systems (like NT 4.0-based systems) will not be recoverable, no matter what you do.  Upgrading to a recent version of Windows, or to some other platform altogether, is the best mitigation.  In other cases, virtualization won’t be feasible because of current budget constraints, technical expertise, or incompatibility (not all current Windows systems can be virtualized, because some have unique hardware requirements or otherwise won’t convert to a virtual machine).  But there are a fair number of cases where virtualizing will help improve recoverability.

Summary

Virtualization can help your organization recover from disasters when the technology is implemented within a plan that is well-designed and well-tested.  Feedback?  Post a comment.