Final HIPAA Security Regulations and EHRs

Note: this article was originally published in Maryland Physician Magazine in its May/June 2013 issue.

The HiTech Act in 2009 set in motion a series of changes to the HIPAA rules that govern the use, disclosure and protection of protected health information ("PHI").  The Department of Health and Human Services ("HHS") subsequently issued interim regulations in response to these changes in the law, and this year issued a final regulation, effective March 26, 2013, that requires compliance by covered entities and business associates within 180 days.  These final HIPAA security regulations make a number of important changes that may impact your relationship with vendors that provide you with electronic health record ("EHR") licensing and support.

First, prior to HiTech, business associates of covered entities were not required to comply with the security rules and standards set forth in the HIPAA security regulations.  HiTech changed the applicability of the security regulations to include business associates.  The final regulation from HHS implements this provision of the HiTech Act, but with a twist: subcontractors to business associates are also defined as business associates within the final regulation.  What this means is that EHR vendors and their subcontractors must fully comply with the HIPAA security rules, not just with “reasonable” security measures.

Second, prior to HiTech, there was no federal requirement that a covered entity or business associate report a security breach that resulted in the disclosure of PHI.  HiTech created such a breach notification requirement, and HHS subsequently issued interim regulations to implement it.  As of March 26, 2013, HHS has issued final regulations that alter the presumptions and exceptions that define a "breach" under HIPAA.  In addition, business associates and subcontractors are now obligated to report security breaches to covered entities.

If you are at the beginning of your search for an EHR vendor, have an attorney review any proposed contract between your organization and the vendor to ensure that the business associate provisions comply with the final regulations.  If you already have an existing relationship, work with your attorney to ensure that the contract in place complies with the final regulatory requirements.  All business associate agreements must come into compliance with the final regulations by September, 2014.

In recent years, some EHR vendors have moved to "cloud"-based data storage and access solutions for their clients.  These cloud systems are designed so that provider data collected by the EHR is stored at a remote data center and made available to the provider over an internet connection.  Some EHR vendors subcontract with a third party to provide the cloud data storage.  More likely than not, that subcontractor is now a business associate under the final regulations and takes on the same obligations as the EHR vendor with regard to your data.  The final regulations require that a covered entity's contract with its business associate require subcontractor compliance with the final security regulations.

Beyond compliance issues, providers will want to evaluate whether an EHR vendor that hosts your data in the “cloud” has really made sufficient provisions for security.  Such an evaluation makes good business sense because of the incredibly negative consequences of any security breach that results in a loss of PHI for a health care provider.  For example, does the vendor comply with a recognized, national security standard (like NIST)?  Is the EHR vendor, or the data center it uses for storing your data, audited against a SAS standard like SAS-70?  What are the security practices and security devices in place at the EHR vendor to protect your data?  If the vendor will host your data, what are its disaster recovery and data backup procedures?  Are those procedures regularly tested?

Providers and their counsel should also evaluate what, if any, additional provisions should be negotiated into any final agreement with the EHR vendor concerning the vendor’s compliance with a security standard, commitment to security procedures, and related obligations (such as maintaining appropriate border security and/or appropriate encryption for data during its transmission).

The changes in HIPAA compliance mean that providers cannot simply treat EHR vendors as a "black box" into which providers place PHI, relying on the EHR vendor's representations that it knows best regarding security.  In addition, because the scope of HIPAA now covers not just covered entities and business associates but also most subcontractors of business associates that handle PHI, more entities are at risk of substantial fines for failing to comply with the applicable security standards.  All providers should work with their counsel to analyze and address compliance with the final regulations.

Reported PHI Breaches

The Department of Health and Human Services (“HHS”) maintains an online list of covered entities and business associates that have experienced PHI breaches where more than 500 individual patient records were involved.  As of the writing of this post, a total of 572 reported breaches are listed on this website.  What can we learn from this information?

First, the dataset covers breaches reported from September, 2009 through February, 2013.  More than 21 million patient records are listed in this report (though there is likely some duplication of patient records between the data breaches reported here).  That total is still less than the single data loss reported by the Department of Veterans Affairs in 2006, when a laptop containing in excess of 26 million records was stolen from an employee's home.  Nonetheless, a significant amount of PHI has been lost or stolen and reported to HHS over the last three and a half years.

Second, the most common scenario for a PHI breach is a lost backup tape, followed by theft.  Almost 6 million patient records were affected by lost tapes alone.  The theft or loss of a laptop came in fourth, affecting about 2.3 million patient records.  Theft generally accounted for more than one third of all records compromised, followed by loss (which probably includes scenarios like backup tapes accidentally tossed in the dumpster, or a tape that fell out of a bag between the office and the car), also accounting for about one third of all records compromised.  Hacking appears farther down the list, affecting a total of 1.3 million patient records.

Third, measured by patient records breached, a little more than half of the data breaches appear to involve a business associate of a covered entity.  However, only 92 of the 572 data breaches note a business associate's involvement, which suggests that when a business associate is involved, more records on average are affected by the breach.  This is consistent with the expectation that technology vendors, like those that implement and/or host electronic health records, often do so for many clients and are a bigger target for data theft, hacking, and computer viruses.

With the changes to breach notification in the final HIPAA regulations recently issued by HHS, it will be interesting to see whether more breach notifications are published to HHS's website.

Changes in HIPAA Breach Notification Rule

HHS recently released the final regulations that revise certain provisions of HIPAA, including the HIPAA breach notification rule.  Congress, in enacting the HiTech Act in 2009, included a statutory requirement that covered entities report breaches involving the unauthorized access or loss of protected health information ("PHI").  HHS then promulgated an interim rule to implement this statutory provision.  That interim rule required reporting of a breach under the "significant risk of financial, reputational or other harm" standard.  Criticism was subsequently leveled at this standard as being too subjective.  HHS's final rule (effective on March 26, 2013) changes the breach reporting rule in two ways.

First, if a breach involves PHI and does not fall within a regulatory exception, the regulation presumes that the breach must be reported.  This means that a party that experiences a loss of PHI can no longer avoid notification simply by assuming that the loss was unlikely to cause significant harm to the patients involved.

Second, the final regulation replaces the interim rule’s standard with a requirement that the party who experienced the loss must demonstrate that there is a low probability that the PHI has been compromised.  In order to qualify under this new standard, the party must perform a risk assessment, taking into account at least the four factors outlined in the regulation.  These factors are found in § 164.402(2):

(i) The nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification;

(ii) The unauthorized person who used the protected health information or to whom the disclosure was made;

(iii) Whether the protected health information was actually acquired or viewed; and

(iv) The extent to which the risk to the protected health information has been mitigated.
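To make the structure of this four-factor analysis concrete, here is a minimal sketch in Python.  The factor names track § 164.402(2), but the 0-3 scoring scale, the "every factor must score low" rule, and the example values are entirely hypothetical illustrations of my own: the regulation calls for a documented, fact-specific analysis, not a numeric formula.

```python
from dataclasses import dataclass

@dataclass
class BreachRiskAssessment:
    """Records the four factors from 45 CFR 164.402(2).

    The 0-3 scores and the all-factors-low rule below are hypothetical
    illustrations, not a legal standard.
    """
    nature_and_extent: int       # (i) identifiers involved, re-identification risk
    unauthorized_recipient: int  # (ii) who used or received the PHI
    actually_acquired: int       # (iii) was the PHI actually acquired or viewed
    unmitigated_risk: int        # (iv) how much risk remains after mitigation

    def low_probability_of_compromise(self) -> bool:
        # Hypothetical rule of thumb: every factor must score low (0 or 1).
        return all(score <= 1 for score in (
            self.nature_and_extent,
            self.unauthorized_recipient,
            self.actually_acquired,
            self.unmitigated_risk,
        ))

# Example: an unencrypted backup tape of the full EHR database, stolen
# from the network administrator's car (see the scenarios below).
tape = BreachRiskAssessment(
    nature_and_extent=3,       # entire patient database with identifiers
    unauthorized_recipient=2,  # a thief; identity and intent unknown
    actually_acquired=2,       # the tape is physically in a stranger's hands
    unmitigated_risk=3,        # no encryption, no locked storage box
)
print("Notification presumed required:", not tape.low_probability_of_compromise())  # True
```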

So, let's evaluate some typical hypothetical scenarios that involve the loss of PHI.  The most common reported PHI breach involves lost data backup tapes.  By design, a data backup tape usually contains the entire database of patient records, because the entire dataset would normally be required to restore the data from the backup.

Under the first factor, such a loss would militate towards breach notification, because the dataset would almost certainly include patient identifiers and, if the backup was of an electronic health record, extensive health information on each patient.  Under the second factor, if the tape was merely lost, there is no way to determine who might have had unauthorized access to the PHI.  If, for example, the backup tape was simply lost by a contractor that stores backup tapes in a vault for retrieval on demand, this factor might lean towards not making a notification.  On the other hand, if the tape was in the trunk of the network administrator's car, and the car was stolen, this factor might lean towards making a notification.

As to the third factor, a lost data tape alone, without more information, would not inform us whether the data was actually acquired by anyone, or viewed by someone.  There is certainly the potential that a lost tape could be viewed, assuming that the person that obtained it had access to a compatible tape drive.  But based on what we know, this factor is probably neutral.

As to the fourth factor, the question here is whether the backup tape itself was encrypted, or was stored in a locked storage box.  A tape that is encrypted is much harder to access, even if the tape was intentionally stolen to obtain unauthorized access to PHI.  A tape in a locked storage box that was merely lost may be less likely to be accessed by an unauthorized user.  So this factor may swing either way based on what, if any, mitigations were in place to protect the data on the backup tape.

If we assumed that no mitigations were in place, the overall analysis would lean towards breach notification under the new rule.  As you can see, however, the facts and circumstances matter greatly in evaluating whether a breach has occurred that requires notification.

Changes in HIPAA Compliance

The HiTech Act set in motion a series of changes to Health Insurance Portability and Accountability Act ("HIPAA") compliance for covered entities and business associates in 2009, which were followed by interim regulations issued by the Department of Health and Human Services ("HHS").  HHS has now issued a final regulation that goes into effect on March 26, 2013 and requires compliance by all covered entities and business associates within 180 days.

The HiTech Act made a number of important changes to the law governing the security and disclosure of protected health information.  First, prior to HiTech, business associates of covered entities were not required to comply with the security rules and standards set forth in the HIPAA security regulations.  HiTech changed the applicability of the security regulations to include business associates.  The final regulation from HHS implements this provision of the HiTech Act.

Second, prior to HiTech, there was no federal requirement that a covered entity or business associate report a security breach that resulted in the disclosure of protected health information ("PHI").  HiTech created such a notification requirement, and HHS subsequently issued interim regulations to implement it.  As of March 26, 2013, HHS has issued final regulations that alter the presumptions and exceptions that define a "breach" under HIPAA.

Business Associates are Covered Entities when it comes to PHI

HiTech initially changed the law governing PHI by requiring that business associates comply with the same security regulations that govern covered entities.  The final regulations from HHS clarify which security rules also apply to business associates under sections 164.104 and 164.106, including the applicable rules found in Parts 160 and 162.  However, HHS also expanded the definition of "business associate" to include subcontractors that handle PHI on behalf of a business associate for the covered entity.  The regulation does provide certain narrow exceptions to who is now covered by the definition of a "business associate," including an exception for "conduits" of PHI that may transmit PHI on a transitory basis but would not access the PHI except on a random or infrequent basis.  But the regulation generally appears to further expand the legal responsibilities, and potential liability, of members of the industry that work even indirectly for covered entities.

For existing health care providers, now might be the time to revisit your business associate agreement with your business associates, such as your EHR vendors.  Section 164.314 establishes certain requirements for these agreements, including provisions that all business associates comply with the full security rule, that subcontractors to business associates also comply with the full security rule, and that business associates provide the covered entity with security incident reporting in the event of a breach at the business associate’s or subcontractor’s facility or systems.

Changes in Security Breach and Notification

HiTech also introduced a breach notification provision intended to require covered entities to report a security breach involving PHI to HHS and, where appropriate, to the patients affected.  The final regulations have modified the definition of a "breach" by establishing a presumption that an unauthorized access of PHI is a breach unless the covered entity or business associate can demonstrate that there is a low probability that the PHI has been compromised.

Such a demonstration requires that the covered entity or business associate conduct a risk assessment and evaluate at a minimum the four factors described in the regulation: “(i) the nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification, (ii) the unauthorized person who used the protected health information or to whom the disclosure was made, (iii) whether the protected health information was actually acquired or viewed, and (iv) the extent to which the risk to the protected health information has been mitigated.”

Altering the burden and requiring a covered entity or business associate to engage in this risk assessment is likely to increase the number of breach notifications required under the final regulation.

The final regulation includes a variety of other changes in requirements for covered entities and business associates not discussed in this article, such as sale and marketing of PHI, use of genetic information for insurance underwriting, notices to patients of privacy practices, and disclosure of PHI to friends and families of decedents.  Providers should promptly examine their privacy and security policies to ensure compliance with the final regulations.

Data Breach: No Joke

As recently noted by the New York Times in this article, health data for nearly 11 million people has been inadvertently disclosed in violation of patient privacy.  Electronic health records systems alone are not to blame: the improper disposal of paper medical records in dumpsters has happened more than once (the HHS website notes 23 reports of data breaches exposing 500 or more paper patient records in one way or another from 2009-2010).  However, computer databases make it easier to disclose larger amounts of health data than in the paper records days of yore.  As a part of the American Recovery and Reinvestment Act of 2009, Congress enacted federal reporting requirements in the event of a data breach by a covered entity.  For the entire law, click here: ARRA Enrolled Bill.

Section 13402 provides the statutory basis for requiring a covered entity to report to the Secretary of Health and Human Services when the security of protected health information is breached.  Both individual notice to the persons affected by the data breach and public notification via the local media are required when more than 500 individuals' information has been lost due to a breach.  In addition, the covered entity is required to advise the Secretary of any breach affecting more than 500 individuals (for smaller breaches, the entity can keep a log and submit it at the end of the year).
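As a rough sketch of how these reporting thresholds fit together, the hypothetical helper below encodes only the 500-record trigger described above; the actual deadlines, notice content, and the media-notice rule (which turns on the number of residents of a state or jurisdiction) are in the statute and regulations.

```python
def notification_obligations(individuals_affected: int) -> list[str]:
    """Sketch of the Section 13402 reporting triggers.

    Deadlines and notice-content requirements are omitted; this only
    captures which channels are implicated at the 500-record line.
    """
    duties = ["notify each affected individual"]
    if individuals_affected > 500:
        duties.append("notify prominent local media")
        duties.append("notify the HHS Secretary")
    else:
        duties.append("log the breach and submit the log to HHS annually")
    return duties

print(notification_obligations(750))  # individual notice, media notice, HHS notice
print(notification_obligations(120))  # individual notice plus the annual log
```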

Patients may suffer identity theft and public embarrassment when their health information is lost by a covered entity.  And, if the breach is substantial enough, the covered entity may lose patients and clinical revenue as a result.  Health care providers can reduce the possibility of such data losses by having strong policies and internal database controls that limit employee and contractor access to data and its portability.  Unfortunately, the problem of data loss (whether by accident or because of hacking) does not appear to be improving, in spite of a number of sentinel events in the last few years, including the loss of a laptop with health data on over 20 million veterans served by the Veterans Administration.

Preparing for Disasters – Practical Preparedness

Disasters happen in the world, and some may directly affect your organization.  Preparing for disasters, whether they be hurricanes, tornadoes, terrorists, hackers, power outages, fires, or earthquakes, means thinking about (a) how your business operates today and (b) how your business would likely operate in the event of a disaster, and then (c) developing a practical, well-designed, testable plan for recovering from a variety of disasters.  Preparedness is also a commitment to ongoing planning, and to investing a certain amount of resources in the process each budget period, because your plan must evolve as the extent and scope of your business change over time.

In Maryland, there are no specific ethics rules that require lawyers to prepare for disasters, though common sense would tell an attorney that missing a deadline because of a disaster is still a missed deadline, and the loss or inadvertent disclosure of confidential client information is still a loss, whether caused by a natural disaster or by simple human error.  Both circumstances can lead to an ethics complaint from a disgruntled client.  For attorneys, there are a number of resources available from the ABA to help firms do a better job of preparing for a disaster.

Doctors' offices that are joining the electronic health record revolution because of the incentives under ARRA will also need a disaster recovery plan.  The HIPAA security regulations include standards for disaster preparation and recovery (45 CFR § 164.308(a)(7) is addressed specifically to contingency planning for covered entities and business associates).  The security regulations are cloaked in terms of "reasonableness," which means that a covered entity's disaster recovery planning efforts should be commensurate with the amount of data and resources it has.  So, a practice of two physicians that sees 8,000 patient visits a year is not expected to have its data available in three DR hot sites.  But, if you are a major insurance carrier, three DR hot sites might not be enough for your operation.  In neither case, however, is no plan an acceptable answer.  Nor is a plan that has never been tested.

Risk Assessment

So where do you start?  The logical starting point is a risk assessment of your existing systems and infrastructure (also required of covered entities under the HIPAA security rules, in section 164.308(a)(1)).  A risk assessment will guide you through gathering an inventory of your existing systems and help to identify known and potentially unknown risks, along with the likelihood that each risk will be realized and what you are doing now (if anything) to mitigate it.  The risk assessment will also help you to categorize how critical each system is to your operations, and will identify severe risks that remain unmitigated.  The resulting list gives you a starting place for the next step: doing something about it.
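One common way to organize the output of such an assessment is a simple risk register: each system gets a criticality rating, each identified risk gets likelihood and impact scores, and the product of the three drives prioritization.  A minimal sketch in Python follows; the 1-5 scales and the entries are hypothetical.

```python
# A minimal risk-register sketch.  Scales (1-5) and entries are hypothetical.
risks = [
    # (system, criticality, risk, likelihood, impact, current mitigation)
    ("EHR database", 5, "virus outbreak", 3, 4, "anti-virus software"),
    ("EHR database", 5, "database crash", 2, 5, "weekly tape backup"),
    ("Billing system", 4, "server failure", 2, 4, "none"),
    ("Email", 2, "mailbox loss", 3, 2, "nightly backup"),
]

# Rank risks by likelihood x impact, weighted by system criticality, so the
# most severe unmitigated exposures float to the top of the to-do list.
for system, crit, risk, likelihood, impact, mitigation in sorted(
        risks, key=lambda r: r[1] * r[3] * r[4], reverse=True):
    print(f"{system:15} {risk:15} score={crit * likelihood * impact:3}  "
          f"mitigation: {mitigation}")
```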

The Disaster Plan

In parallel, you can use the inventory of your existing systems and risks to develop a disaster recovery plan.  First, you now have a list of your critical systems, which are your highest priority to recover in the event of a failure.  Second, you have a list of likely risks to those systems, with the likelihood based in part on your past experience with a particular disaster.  These lists help you to identify what you need to protect and what you need to protect against.  The other two questions you need to ask for each system are: (a) how much data can I stand to lose in the event of a disaster? and (b) how long can I wait to have my system restored to normal operations?
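These last two questions are what disaster recovery planners usually call the recovery point objective (RPO: the most data you can afford to lose) and the recovery time objective (RTO: the longest you can afford to be down).  Here is a minimal sketch of recording those targets per system, with hypothetical systems and numbers:

```python
from dataclasses import dataclass

@dataclass
class RecoveryObjective:
    system: str
    rpo_hours: float  # recovery point objective: maximum tolerable data loss
    rto_hours: float  # recovery time objective: maximum tolerable downtime

# Hypothetical targets gathered from interviews with the practice staff.
objectives = [
    RecoveryObjective("EHR database", rpo_hours=4, rto_hours=8),
    RecoveryObjective("Billing system", rpo_hours=24, rto_hours=24),
    RecoveryObjective("Email", rpo_hours=24, rto_hours=48),
]

for o in objectives:
    print(f"{o.system}: lose at most {o.rpo_hours}h of data, "
          f"restore within {o.rto_hours}h")
```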

This analysis of your existing systems, risks, and business requirements will lead the practice to a plan that includes procedures for how to function when systems are unavailable, and for how to restore an unavailable system within the business requirements of the practice.  Once you have your plan, and have implemented the systems or policies it requires, your next step is to test the plan.  Tabletop exercises allow you, in a conference room, to walk through the staffing, procedures, and possible issues that may arise in a particular disaster scenario.  Technical testing permits your IT staff to make sure that a disaster recovery system works according to the expected technical outcomes.  Full-blown testing actually simulates a disaster, perhaps during non-business hours, running through the disaster plan's procedures for both operations and IT.

Hypothetical

As an example, suppose that you have an electronic health record system.  This is a critical system based on the risk assessment.  In the last five years, you have had a virus that partially disabled your records system, causing an outage of two business days, and you have had your database crash, causing you to lose a week's worth of data.  You have implemented two mitigations.  The first is anti-virus software that regularly updates its virus definitions, scans the system, and removes any viruses found.  The second is a backup system that makes a weekly backup of your system's data and stores it in a separate storage system.

Based on interviews with the practice staff and owner, the records system is used as a part of patient care.  During normal business hours, an outage of the system can result in patients being re-scheduled, and also creates double work to document kept visits on paper and again in the record system when it becomes available.  The practice has indicated that the most it can be without the system is a single business day, and the most data that it can lose from this system is the most recent 4 hours of data entry (which can be reconstructed by the clinical staff that day).

You then evaluate the mitigations in place today that allow for a system recovery in the event of a likely disaster (a virus or database crash, based on the past experience of the practice).  The backup system today runs only once per week, which means that a crash or virus that occurred later in the week would result in more than 4 hours of lost data.  Recovery from the backup device to a new server also appears to require more than a business day, because the practice has no spare server equipment available.  So you would have to start over with the existing server (installing the operating system and database software, and then restoring the data from the backup), or purchase a new server and have it delivered to complete the restore.
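In the RPO/RTO terms sketched earlier, the gap can be stated mechanically: a weekly backup means a worst case of roughly 168 hours of lost data against a 4-hour RPO, and a rebuild-from-scratch restore exceeds a one-business-day RTO.  The numbers below are hypothetical estimates for this scenario:

```python
# Capability of the current mitigations (hypothetical estimates).
backup_interval_hours = 7 * 24   # weekly backup: worst case ~168h of lost data
estimated_restore_hours = 30     # reinstall OS + database + restore; no spare server

# Business requirements from the practice interviews above.
rpo_hours = 4   # at most 4 hours of lost data entry
rto_hours = 8   # restored within a single business day

print("Backup schedule meets RPO:", backup_interval_hours <= rpo_hours)    # False
print("Restore process meets RTO:", estimated_restore_hours <= rto_hours)  # False
```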

The conclusion here is that while there is an existing mitigation for recovery from a likely disaster, the mitigation does not meet the business requirements of the practice.

Budget for New Sufficient Mitigations

Once you have your list of unmitigated or insufficiently mitigated risks, the next step is to look for mitigations that you could implement on your network.  A mitigation might be a disaster recovery system or service, or it might be some other service or product that can be purchased (like anti-virus software, a hardware warranty, a staff person, etc.).  At this point, the help of a technical consultant may be required if you don't have your own IT department.  The consultant's role here is to advise you about what you can do, and what the likely costs are to purchase and implement a solution that will meet your business requirements given your likely disaster risks.

Once sufficient solutions have been identified, the next step is to purchase a solution and implement it.  From there, testing is key as noted above.  An untested plan is not much of a plan.

Disaster Recovery and the Japanese Tsunami

The art of disaster recovery is to plan for what may be the unthinkable while balancing mitigations that are both feasible and reasonable for your organization's resources and circumstances.  On March 11, Japan was struck by a massive earthquake and tsunami that caused enormous destruction, estimated at a total loss of $310 billion.  Over the last several weeks, one of the major failures has been at the nuclear power complex in Fukushima, home to six nuclear power plants.  As of the writing of this post, the disaster continues: at least two of the plants remain in a critical state because of the failure of the complex's power and backup power systems, which controlled the temperature of the nuclear fuel rods used to generate power at the plants.

As an unfortunate consequence, many people have been exposed to higher-than-normal radiation, food grown in the area of the plant has shown elevated levels of radioactive materials, radioactive isotopes in higher-than-normal concentrations have been detected in the ocean near the plants, and numerous nuclear technicians have been exposed to significant radiation, resulting in injuries and hospitalizations.  As far as disasters go, the loss of life and resources has been severe.  And like other major environmental and natural disasters, the effects of the earthquake and tsunami will be felt for years by many people.

Natural disasters like this one cannot be prevented.  We lack the technology today to effectively predict or control these kinds of events.  And while larger scale disasters are relatively rare, planners still need to assess the relative likelihood of such events and develop reasonable mitigation plans to help an entity recover should such a disaster occur.  Computerized health records improve the odds of recovery: the data housed in these systems can be cost-effectively backed up and retained at other secure locations, permitting system recovery and the ability to continue operations.  In contrast to digital files, paper records are far less likely to be recovered were a tsunami or other similar natural disaster to wash the records away.

Even the best recovery plan, however, will be severely tested should a major disaster be realized.  Japan was hardly unprepared for a major earthquake, and still is struggling to bring its nuclear facilities under control nearly three weeks later.  However, having a plan and testing it regularly will increase the odds of recovery.  My thoughts are with the Japanese during these difficult times.

Disaster Recovery Planning

I recently had the pleasure of presenting to a group of IT and business leaders on the topic of disaster recovery.  Based on some of the questions and feedback from the group, I thought I would add some comments on this topic to the blog.

First, a fair number of attendees commented that they were having a hard time explaining the need for disaster recovery, or obtaining the necessary resources (staff time, money, or both) to implement a solution.  Of the attendees, only a handful reported they had completed the implementation of a disaster recovery solution.  I think these are common problems for many organizations that are otherwise properly focused on meeting the computing needs of their user community.  Disasters generally happen infrequently enough that they do not remain a focus of senior management.  Instead, most businesses focus on servicing their customer base, generating revenue, and addressing the day-to-day issues that get in the way of those things.

Second, one of the attendees properly emphasized that IT staff are an important part of the planning equation.  Without qualified and available staff, a disaster recovery system will not produce the desired outcome of a timely and successful recovery, no matter how much the system itself costs.

Third, at least one attendee indicated that they had implemented a solution with a service provider, but the solution was incomplete for the organization's recovery needs.  This is a common problem for organizations whose systems change significantly over time but that do not include disaster recovery in the new-system acquisition process.

Disaster recovery as a concept should not be introduced as an IT project, in spite of the fact that there are important IT components to any disaster recovery plan.  Instead, disaster recovery is a mindset.  It should appear on the checklist of items to consider for organizational decisions, along with other considerations like “how will this project generate revenue?” and “how will this project impact our commitment to protecting customer data?”

Disaster recovery solutions are more than just another virtual server or service.  Disaster recovery is another insurance policy against the uncertainty of life.  Organizations routinely purchase liability insurance, errors and omissions insurance, and other insurance policies on the basis that unanticipated negative events will inevitably occur.  System failures, computer viruses, and other environmental failures are inevitable, even if rare.  Disaster recovery solutions are a hedge against these unfortunate events.

Risk assessments for information systems help organizations to quantify their exposure to the unknown and to estimate the potential impact to the organization if a threat is realized.  Risk assessments also provide an orderly way to prioritize system recoveries, so that a disaster recovery solution focuses on mitigating the largest risks to the most critical information systems.  As was pointed out at the presentation, payroll systems often seem the most critical, but the mitigation for the unexpected failure of a payroll system may not be a computer solution at all.  Instead, the organization may elect to simply pay employees cash based on their last paycheck, and reconcile payments once the payroll system is available again.

Cloud Computing and Other Buzz Words

The technology that drives health care today is changing in response to increased concerns about security and reliability, and to external regulations like the security regulations in HIPAA.  In addition, the HiTech portion of this year's stimulus law has provided incentives for health care providers to adopt technology that allows for health data exchange and for quality reporting (a data-driven process for reporting outcomes on certain quality measures as defined by the Secretary of Health and Human Services).  There are a fair number of technology vendors that provide electronic health records (EHR) systems today, and also a fair number of vendors that have developed business intelligence or more sophisticated data reporting tools.  Health data exchange is a newer field; Google and Microsoft have begun developing systems that allow users to establish a personal health record database, and some states have started planning for larger scale data repositories, but this concept is still in its beginning stages.

A buzz word today in technology is "cloud computing," which is a fancy way of describing internet systems that businesses can rent from service providers to perform business tasks.  The idea is not new, even if the buzz word is; in days of yore, we called these "application service providers," or ASPs for short.  I suppose that the IT marketing folks got sick of being compared with a nasty snake and thought clouds were better (or maybe more humorous, if they had ever read Aristophanes).  Of course, the pejorative "vaporware," which roughly describes a software vendor marketing a product it does not yet actually have to sell, also rings of clouds and things in the sky.  And the old "pie in the sky," a way of saying "that's a nice idea but has no hope of being useful down here where mere mortals live," could also relate to clouds.

That aside, there may be something to cloud computing for us mere mortals.  One of the important aspects of technology is how complex it actually is under the covers, and the degree and scope of support actually required to get the technology to work properly.  Larger businesses that have high concentrations of technology engineers and analysts are better equipped than the average business to deal with technology issues.  In this respect, cloud computing offers a business a way to "leverage" (another business term thrown around casually) the expertise of a fair number of technology experts without having to hire all of them full time.  One of the dilemmas for business consumers, however, is how much they need to be able to trust the technology partner they rent from.  This is the same problem that ASPs originally faced years ago.  What happens to the data in the cloud when the cloud computing vendor either stops providing the service you are using, or just goes out of business?  How do the businesses work together on transitioning from one cloud to another, or from the cloud back in-house?  What if the business wants to host its own cloud onsite or at its existing hosting facility?  How are changes to the hosted application controlled and tested?  How often are backups performed, and how often are they tested?  How "highly available" is the hosted, highly available system?  How are disasters mitigated, and what is the service provider's disaster recovery/business continuity plan?  How are service provider staff hired, and what clearance procedures are employed to ensure that staff aren't felons that regularly steal identities?  The list of issues is a long one.

The other dilemma for businesses that want to use cloud computing services is that many of these services have a standard form contract that may not be negotiable, or essential parts of it may not be negotiable.  For example, most cloud computing vendors have hired smart attorneys who have drafted a contract that puts all the liability on the customer if something goes wrong, or that otherwise limits liability so severely that the business customer will need to buy a considerable amount of business insurance to offset the risks that exist with the cloud, should it ever fail, rain, or just leak into the basement.

On the other hand, businesses that have their own IT departments have the same set of risks.  The difference, I think, is that many businesses do not have liability contracts with their otherwise at-will IT staff.  So, if things go horribly wrong (e.g., think “negligence”), the most that might happen to the IT person responsible is immediate termination (except in cases of intentional property theft or destruction, both of which may lead to criminal but not automatic civil liability for the IT person involved).  How much time does a business have to invest to develop and implement effective system policies, the actual systems themselves, and the staff to maintain those systems?

The advent of more widely adopted EHR systems in the U.S. will likely heat up the debate over whether to use cloud computing services, or virtualized desktops hosted centrally by a hosting company, to roll out the functionality of these systems to a broader base of providers (currently, an estimated 1 in 5 providers use some EHR).  Companies whose offerings cost providers less than the Medicare incentive payments, while helping providers comply with the security regulations, will likely have the most success in the next few years.  Stay tuned!

Health IT & Open Source

The truth is that I may just be getting annoyed about this debate.  A recent blog posting on Wired (click here for the article) frames the debate over health technology in terms of open source versus legacy or proprietary code, the latter being the enemy of innovation, improved health outcomes, and usability.

First off, an open source program is merely one governed by an open source license, such as some version of the GPL, which means that other developers can reverse engineer it, make derivative works, or otherwise include your open source code in their subsequent open source code.  Developers that freely work together to write something cool are developers writing code.  They aren't necessarily health experts, physicians, or efficiency gurus; in fact, they may not even have health insurance if they live in the U.S. (1 in 6 of us are uninsured).  The fact that code is open source does have a big impact on how U.S. copyright law protects the work, but it doesn't mean that an open source developer is somehow more in tune with health IT requirements, how best to integrate the system into a physician's practice, or even what the actual requirements are for a physician to see a patient and document the visit in a way that avoids liability for fraud or malpractice.  That's because for developers, requirements come from outside of the development community, from users.

And guess what – proprietary developers of software listen to their user community to understand their requirements.  It’s part of the job of developers, regardless of whether the code is open source or proprietary.  And, for everyone participating in the global economy, the people that pay for your product generally drive the features and functionality in it.  If you can’t deliver, then your user base will go find someone else who can deliver.

Now, for larger health organizations, health records systems are a multi-year investment.  This inherently locks the health organization into a longer term, and more conservative, relationship with its health IT vendor, which tends to reduce the amount of change introduced into a health records system over time – especially for the larger vendors that have a lot of big clients.  The little developer out there writing code at 3am is certainly going to respond to market changes far more quickly than a really big corporation with a health IT platform.  But you know what?  Try getting the little guy to support your 500 desktop installations of his software 24×7.  Do you really think he can afford to staff a help desk around the clock for your business?  What happens when he has two customers with emergencies?  Or when he wants to get some sleep?  And what about change control?  Even big vendors stumble in testing their code to make sure it works and is secure before releasing it (think Microsoft).  Solo open source developers, even working in informal teams, are going to miss at least as often as a larger vendor, and introducing a lot more changes just increases the frequency with which an untested change becomes an "unpublished feature," aka the "blue screen of death."  Trust me on this one: the health care user base is not going to be very tolerant of that.

Repeatedly, I hear the refrain that this stimulus money is going to go to systems that can be put to a "meaningful use," and that this is going to exclude rogue open source health IT developers from being funded, squelching innovation in the marketplace.  I imagine that complying with the security regulations under HIPAA probably hinders innovation, too, but those regulations increase the reliability of the system vendors that remain in the marketplace and reduce the risk to the data of the patients in their computer systems.  Setting minimum standards for health records systems may favor incumbent systems, but honestly – is that so wrong?  Isn't the trade-off here that when someone buys a certified system, they can have the satisfaction of knowing that someone else, without a vested interest in the product, thought it had certain features or a proven record of delivering certain outcomes?  Perhaps the certifiers aren't neutral because they come from the EHR industry, but if I recall correctly, the people that run the internet have committees with representatives from the internet industry, yet I rarely hear that the standards for the POP3 protocol unfairly burden new or open source developers.

That someone, like a government agency, sets standards for EHRs is a lot like the government setting the requirements for you to receive a driver's license.  Everyone who drives needs to understand what the red, octagonal sign with the capital letters S T O P means.  On the other hand, you may never parallel park again, but you had better learn how to do it if you want your license to drive in Maryland.  Standards are always a mixed bag of useful and not-so-useful rules, but I don't think there are too many people out there arguing that the government should not set minimum standards for drivers.  A certification requirement for EHRs to establish minimum standards is no different.  Ask the JCAHO people about it.  Ask the HIPAA police.  Ask the IT people you know.  If you are going to develop an EHR, you had better secure it, make sure the entries in the database are non-repudiable, and have a disaster recovery approach.  Don't know what these things are?  Do your homework before you write a computer system.

Now, another refrain is to look at how all of these proprietary systems have failed the world of health provisioning.  For example, look at how more kids died at the Children's Hospital ER in Pittsburgh after the hospital implemented an EHR (I can feel a class action lawsuit in federal court).  Who implements EHRs in ERs?  So the doctor is standing there and a patient is having a heart attack.  What should the doctor's first act be?  To register the patient into the EHR and record his vitals?  I would think the doctor should be getting out the paddles and worrying about the patient's heart beat, but then, I am an attorney and systems guy, not a physician.  Look – dumb decisions to implement a computer system should not lead subsequent critics to blame the computer system for not meeting the requirements of the installation.  EHR is not appropriate every place patients are seen or for every workflow in a health care provider's facility.  No knock on the open source people, but I don't want my ER physician clicking on their software when I am dying in the ER, either.  I don't want my doctor clicking anything at all – I want her to be saving me.  That's why I have been delivered to the ER.

Now, VistA is getting a lot of mileage these days as an open source, publicly funded, and successful example of EHR in action.  And it is free.  But in fairness, VistA is not a new piece of software recently written by three college kids in a garage somewhere in between World of Warcraft online gaming sessions.  This program has been in development for years.  And “free” is relative.

For example, if you want support, you need to pay for it.  If you want to run it in a production environment, you will need to buy equipment and probably get expert help.  If you want to implement it, you will need to form a committee, develop a project plan, implement the project intelligently with input from your users, and be prepared to make a lot of changes to fit this system (or any system) into your health facility's workflows.  And if you find yourself writing anything approaching software, that will cost you something, too, as most health care providers do not have a team of developers available to modify a computer system.  So, "free" in this context is relative, and greatly understates the scope and effort required to get any piece of software to work in your facility.  "Less" may be a more appropriate adjective.  But then, that's only true if you can avoid costly modifications to the software, and so far, there is no single EHR system that works in every setting, so expect to make modifications.

That’s my rant.  Happy EHR-ing!