Final HIPAA Security Regulations and EHRs

Note: this article was originally published in Maryland Physician Magazine in its May/June 2013 issue.

The HiTech Act in 2009 set in motion a series of changes to the HIPAA rules that govern the use, disclosure and protection of protected health information (“PHI”).  The Department of Health and Human Services (“HHS”) subsequently issued interim regulations in response to these changes in the law, and this year issued a final regulation, effective March 26, 2013, that requires compliance by covered entities and business associates within 180 days.  These final HIPAA security regulations make a number of important changes that may affect your relationship with vendors that provide you with electronic health record (“EHR”) licensing and support.

First, prior to HiTech, business associates of covered entities were not required to comply with the security rules and standards set forth in the HIPAA security regulations.  HiTech changed the applicability of the security regulations to include business associates.  The final regulation from HHS implements this provision of the HiTech Act, but with a twist: subcontractors to business associates are also defined as business associates within the final regulation.  As a result, EHR vendors and their subcontractors must fully comply with the HIPAA security rules, not just adopt “reasonable” security measures.

Second, prior to HiTech, there was no federal requirement that a covered entity or business associate report a security breach that resulted in the disclosure of PHI.  HHS subsequently issued interim regulations to implement these notification requirements, and its final regulations, effective March 26, 2013, alter the presumptions and exceptions that define a “breach” under HIPAA.  In addition, business associates and subcontractors are now obligated to report security breaches to covered entities.

If you are at the beginning of your search for an EHR vendor, have an attorney review any proposed contract between your organization and the vendor to ensure that the business associate provisions comply with the final regulations.  If you already have an existing vendor relationship, work with your attorney to ensure that the contract in place meets the final regulatory requirements.  All business associate agreements must come into compliance with the final regulations by September 2014.

In recent years, some EHR vendors have moved to “cloud”-based data storage and access solutions for their clients.  These cloud systems are designed so that provider data collected by the EHR is stored at a remote data center and made available to the provider over an internet connection.  Some EHR vendors subcontract with a third party to provide the cloud data storage.  More likely than not, that subcontractor is now a business associate under the final regulations and takes on the same obligations as the EHR vendor with regard to your data.  The final regulations require that a covered entity’s contract with its business associate require subcontractor compliance with the final security regulations.

Beyond compliance issues, providers will want to evaluate whether an EHR vendor that hosts your data in the “cloud” has really made sufficient provisions for security.  Such an evaluation makes good business sense because of the incredibly negative consequences of any security breach that results in a loss of PHI for a health care provider.  For example, does the vendor comply with a recognized, national security standard (like NIST)?  Is the EHR vendor, or the data center it uses for storing your data, audited against a recognized attestation standard like SAS 70 (now superseded by SSAE 16)?  What are the security practices and security devices in place at the EHR vendor to protect your data?  If the vendor will host your data, what are its disaster recovery and data backup procedures?  Are those procedures regularly tested?

Providers and their counsel should also evaluate what, if any, additional provisions should be negotiated into any final agreement with the EHR vendor concerning the vendor’s compliance with a security standard, commitment to security procedures, and related obligations (such as maintaining appropriate border security and/or appropriate encryption for data during its transmission).

The changes in HIPAA compliance mean that providers cannot simply treat EHR vendors as a “black box” into which providers place PHI, and rely on the EHR vendor’s representations that they know best regarding security.  In addition, because the scope of HIPAA now extends beyond covered entities and business associates to most subcontractors of business associates that handle PHI, more entities are at risk of substantial fines for failing to comply with the applicable security standards.  All providers should work with their counsel to analyze and address compliance with the final regulations.

Reported PHI Breaches

The Department of Health and Human Services (“HHS”) maintains an online list of covered entities and business associates that have experienced PHI breaches where more than 500 individual patient records were involved.  As of the writing of this post, a total of 572 reported breaches are listed on this website.  What can we learn from this information?

First, the dataset covers breaches reported from September 2009 through February 2013.  More than 21 million patient records appear in this report (though some patient records are likely duplicated across the breaches reported here).  Together, these incidents involve fewer records than the single data loss reported by the Department of Veterans Affairs in 2006, when a laptop containing more than 26 million records was stolen from an employee’s home.  Nonetheless, a significant amount of PHI has been lost or stolen and reported to HHS over the last three and a half years.

Second, the most common scenario for a PHI breach is the loss of backup tapes, followed by theft.  Almost 6 million patient records were affected by lost backup tapes alone.  The theft or loss of a laptop came in fourth, affecting about 2.3 million patient records.  Theft generally accounted for more than one third of all records compromised, followed by loss (which probably includes scenarios like “we accidentally put the backup tapes in the dumpster” or “the tape fell out of my bag between the office and my car”), which also accounted for about one third of all records compromised.  Hacking appears further down the list, affecting a total of 1.3 million patient records.

Third, measured by the number of patient records breached, a little more than half of the breached data appears to involve a business associate of a covered entity.  However, only 92 of the 572 reported breaches note a business associate’s involvement, which suggests that when a business associate is involved, more records on average are affected.  This is consistent with the expectation that technology vendors, like those that implement and/or host electronic health records, often serve many clients and present a bigger target for data theft, hacking and computer viruses.
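The suggestion that business associate breaches are larger on average follows from simple arithmetic.  Here is a rough sketch; only the 572 and 92 breach counts and the 21 million record total come from the HHS data above, and the even split of records between business associate and other breaches is an assumption for illustration:

```python
# Back-of-the-envelope comparison of average breach size. The 50/50 split
# of records between business-associate and other breaches is an assumed
# figure for illustration, not a number taken from the HHS data.

total_breaches = 572
total_records = 21_000_000
ba_breaches = 92                       # breaches noting business associate involvement
ba_records = total_records // 2        # assumed: roughly half of all records breached
other_breaches = total_breaches - ba_breaches
other_records = total_records - ba_records

avg_ba = ba_records / ba_breaches      # average size of a BA-involved breach
avg_other = other_records / other_breaches

print(f"Average records per BA-involved breach: {avg_ba:,.0f}")
print(f"Average records per other breach:       {avg_other:,.0f}")
```

Under these assumptions, the average business associate breach involves roughly five times as many records as the average breach without one.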

With the change in breach notification in the final HIPAA regulations recently issued by HHS, it will be interesting to see if there are more breach notifications published to HHS’ web site.

Changes in HIPAA Breach Notification Rule

HHS recently released the final regulations that revise certain provisions of HIPAA, including the HIPAA breach notification rule.  Congress, in enacting the HiTech Act in 2009, included a statutory requirement that covered entities report breaches that involved the unauthorized access or loss of protected health information (“PHI”).  HHS then promulgated an interim rule to implement this statutory provision.  That interim rule required reporting of the breach under the “significant risk of financial, reputational or other harm” standard.  Criticism was subsequently leveled at this standard as being too subjective.  HHS has now issued its final rule (effective March 26, 2013), which changes the breach reporting rule in two ways.

First, if a breach involves PHI and does not fall within a regulatory exception, the regulation presumes that the breach must be reported.  A party that experiences a loss of PHI can no longer avoid notification simply by asserting that the loss was unlikely to cause significant harm to the patients involved.

Second, the final regulation replaces the interim rule’s standard with a requirement that the party who experienced the loss must demonstrate that there is a low probability that the PHI has been compromised.  In order to qualify under this new standard, the party must perform a risk assessment, taking into account at least the four factors outlined in the regulation.  These factors are found in § 164.402(2):

(i) The nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification;

(ii) The unauthorized person who used the protected health information or to whom the disclosure was made;

(iii) Whether the protected health information was actually acquired or viewed; and

(iv) The extent to which the risk to the protected health information has been mitigated.

So, let’s evaluate some typical hypothetical scenarios that involve the loss of PHI.  The most common reported PHI breach involves lost data backup tapes.  By design, a backup tape usually contains the entire database of patient records, because the entire dataset would normally be required to restore the data from the backup.

Under the first factor, such a loss would militate towards breach notification, because the dataset would almost certainly include patient identifiers and, if the backup was of an electronic health record, extensive health information on each patient.  Under the second factor, if the tape was merely lost, there is no determination of who might have had unauthorized access to the PHI.  If, for example, the backup tape was just simply lost by a contractor that stores the backup tapes in a vault for retrieval on demand, this factor might lean towards not making a notification.  On the other hand, if the tape was in the trunk of the network administrator’s car, and the car was stolen, this factor might lean towards making a notification.

As to the third factor, a lost data tape alone, without more information, would not inform us whether the data was actually acquired by anyone, or viewed by someone.  There is certainly the potential that a lost tape could be viewed, assuming that the person that obtained it had access to a compatible tape drive.  But based on what we know, this factor is probably neutral.

As to the fourth factor, the question here is whether the backup tape itself was encrypted, or was stored in a locked storage box.  A tape that is encrypted is much harder to access, even if the tape was intentionally stolen to obtain unauthorized access to PHI.  A tape in a locked storage box that was merely lost may be less likely to be accessed by an unauthorized user.  So this factor may swing either way based on what, if any, mitigations were in place to protect the data on the backup tape.

If we assumed that no mitigations were in place, the overall analysis would lean towards breach notification under the new rule.  As you can see, however, the facts and circumstances matter greatly in evaluating whether a breach has occurred that requires notification.
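For readers who like checklists, the four-factor analysis above can be organized as a simple worksheet.  This is a hypothetical sketch of my own devising: the factor names track § 164.402(2), but the scoring convention and function names are illustrative only, and nothing in the rule prescribes them (the real determination is a judgment call, not a sum):

```python
# A hypothetical worksheet for the four-factor breach risk assessment.
# The factors track 45 CFR 164.402(2); the -1/0/+1 scoring convention
# (negative leans against notification, positive leans toward it) is an
# illustrative device, not part of the rule.

from dataclasses import dataclass

@dataclass
class FactorAssessment:
    factor: str   # which of the four regulatory factors
    finding: str  # the facts considered
    lean: int     # -1 = against notification, 0 = neutral, +1 = toward notification

def overall_lean(assessments: list[FactorAssessment]) -> str:
    """Summarize the direction of the analysis; flags only the overall tilt."""
    score = sum(a.lean for a in assessments)
    if score > 0:
        return "leans toward breach notification"
    if score < 0:
        return "leans toward low probability of compromise"
    return "neutral; facts and circumstances control"

# The lost-backup-tape hypothetical from the discussion above, with no
# mitigations in place:
tape_case = [
    FactorAssessment("nature/extent of PHI", "full patient database with identifiers", +1),
    FactorAssessment("unauthorized recipient", "unknown; tape simply lost", 0),
    FactorAssessment("actually acquired or viewed", "no evidence either way", 0),
    FactorAssessment("mitigation", "tape unencrypted, no locked container", +1),
]

print(overall_lean(tape_case))  # leans toward breach notification
```

Swapping the mitigation entry for an encrypted tape in a locked box (a -1 lean) would tip the same worksheet back toward a low probability of compromise, which mirrors how the fourth factor can swing the analysis either way.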

Changes in HIPAA Compliance

The HiTech Act set in motion a series of changes to Health Insurance Portability and Accountability Act (“HIPAA”) compliance for covered entities and business associates in 2009, which were followed by interim regulations issued by the Department of Health and Human Services (“HHS”).  HHS has issued a final regulation that goes into effect on March 26, 2013, and requires compliance within 180 days by all covered entities and business associates.

The HiTech Act made a number of important changes to the law governing the security and disclosure of protected health information.  First, prior to HiTech, business associates of covered entities were not required to comply with the security rules and standards set forth in the HIPAA security regulations.  HiTech changed the applicability of the security regulations to include business associates.  The final regulation from HHS implements this provision of the HiTech Act.

Second, prior to HiTech, there was no federal requirement that a covered entity or business associate report a security breach that resulted in the disclosure of protected health information (“PHI”).  HHS subsequently issued interim regulations to implement these notification requirements, and as of March 26, 2013, HHS issued final regulations that alter the assumptions and exceptions to what constitutes a “breach” under HIPAA.

Business Associates are Covered Entities when it comes to PHI

HiTech initially changed the law governing PHI by requiring that business associates comply with the same security regulations that govern covered entities.  The final regulations from HHS clarify which security rules also apply to business associates under sections 164.104 and 164.106, including the applicable rules found in Parts 160 and 162.  However, HHS also expanded the definition of “business associate” to include subcontractors of business associates that handle PHI on behalf of the business associate for the covered entity.  The regulation does provide certain narrow exceptions to who is now covered by the definition of a “business associate,” including an exception for “conduits” of PHI that may, on a transitory basis, transmit PHI but would not access the PHI except on a random or infrequent basis.  But the regulation appears generally to further expand the legal responsibilities, and potential liability, of members of the industry that work even indirectly for covered entities.

For existing health care providers, now might be the time to revisit your business associate agreement with your business associates, such as your EHR vendors.  Section 164.314 establishes certain requirements for these agreements, including provisions that all business associates comply with the full security rule, that subcontractors to business associates also comply with the full security rule, and that business associates provide the covered entity with security incident reporting in the event of a breach at the business associate’s or subcontractor’s facility or systems.

Changes in Security Breach and Notification

HiTech also introduced a breach notification provision intended to require covered entities to report to HHS and, where appropriate, to patients affected by a security breach involving their PHI.  The final regulations modify the definition of a “breach” by establishing a presumption that an unauthorized access of PHI is a breach unless the covered entity or business associate can demonstrate that there is a low probability that the PHI has been compromised.

Such a demonstration requires that the covered entity or business associate conduct a risk assessment and evaluate at a minimum the four factors described in the regulation: “(i) the nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification, (ii) the unauthorized person who used the protected health information or to whom the disclosure was made, (iii) whether the protected health information was actually acquired or viewed, and (iv) the extent to which the risk to the protected health information has been mitigated.”

Altering the burden and requiring a covered entity or business associate to engage in this risk assessment is likely to increase the number of breach notifications required under the final regulation.

The final regulation includes a variety of other changes in requirements for covered entities and business associates not discussed in this article, such as sale and marketing of PHI, use of genetic information for insurance underwriting, notices to patients of privacy practices, and disclosure of PHI to friends and families of decedents.  Providers should promptly examine their privacy and security policies to ensure compliance with the final regulations.

Meaningful Use Stage 2 Regulations Released

The Meaningful Use Stage 2 proposed rule was released earlier this week.  You can download a copy of the full 455-page regulation here: MU Stage 2 Proposed Rule.  For those keeping score at home, there are three stages of “meaningful use,” as that term is defined in section 495.6 of the regulations.  Stage 1 set certain Core (required) and Menu (pick from the list to implement) Criteria, and established minimum compliance metrics for an “eligible professional” to qualify for the federal incentives.  The original regulations that defined “meaningful use” indicated that the definition would change in two more stages.  We initially expected Stage 2 to be defined for compliance in 2013.  However, the regulations have pushed out compliance for Stage 2 to 2014.  This article takes a look at what’s been proposed for Stage 2.

First off, there are more “Core” (required) Criteria in Stage 2.  Stage 1 had a total of 15 Core Criteria, some of which any certified electronic health record would have to meet (such as collecting certain demographic and vital signs data for patients seen in the office).  In addition, there were several Core Criteria for which, when the regulations were originally published, no one had yet defined how a provider might actually comply.  For example, one Stage 1 Core Criterion required providers to submit certain quality data to either CMS or their State Medicaid program.  But no one had indicated what data, exactly, was to be submitted, or how.  The metric in Stage 1 was merely the ability to submit a test file.

Stage 2 has 17 total Core Criteria.  In several cases, CMS has proposed to terminate a prior Stage 1 Core item entirely in Stage 2.  And in a number of cases, Criteria that were previously on the “Menu” in Stage 1 are now incorporated as Stage 2 Core Criteria.  For example, structured lab data, patient lists by specific condition for use in a quality improvement initiative, patient reminders, patient access to electronic health information, patient education resources, medication reconciliation for transition of care, care summary for patients transitioned to another provider, and data submission to an immunization registry were all Menu Criteria in Stage 1 and are now Core Criteria in Stage 2.

Also, where a Stage 1 Criteria was kept, the minimum compliance percentage has increased, in some cases substantially, in Stage 2.  For example, where a 50% compliance rate was sufficient for Stage 1 for collecting patient smoking status, in Stage 2, the compliance rate minimum is 80%.  In Stage 1, a single decision support rule needed to be implemented for compliance.  In Stage 2, five such rules must be implemented.

As for the Menu Criteria, Stage 1 required that an eligible provider implement 5 of the 10 on the list.  In total, therefore, a provider had 20 Criteria that had to be met to achieve meaningful use.  In Stage 2, there are only 5 Menu Criteria, and the provider must meet at least three.  So the total number of required criteria is no different, but providers have fewer menu criteria to choose from.  In addition, the Menu Criteria in Stage 2 include three interfaces with specific state or public health registries, and the remaining two involve access to imaging results in the EHR and storing family health history in a structured data format.  You may be able to waive out of some of these if there isn’t a way in your state to submit surveillance or other registry data electronically.  However, if you elect to implement one of these interfaces, the compliance requirement under Stage 2 is full year data submission to the registry (not just submitting a test file).  If you plan on doing one of these, start early to make sure you can get to the compliance target by 2014.
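The criteria arithmetic above can be checked in a couple of lines (the counts are those proposed in the rule, as described here):

```python
# Checking the total criteria counts for Stage 1 vs. Stage 2.
stage1_core, stage1_menu_required = 15, 5   # pick 5 of 10 Menu Criteria
stage2_core, stage2_menu_required = 17, 3   # pick 3 of 5 Menu Criteria

stage1_total = stage1_core + stage1_menu_required
stage2_total = stage2_core + stage2_menu_required

# Same total of 20 either way, but more of Stage 2 is mandatory Core.
print(stage1_total, stage2_total)  # 20 20
```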

Overall, Stage 2 appears to “up the game” for providers who wish to continue to receive incentive payments in out years of the program.  The Stage 2 rules that were published this week are proposed rules.  The public has 60 days to submit comments.  After that, CMS will publish a final rule, taking into account comments made during the comment period.  While it is possible that CMS may back down on some of these measures, providers should plan to comply with much of this Rule.  Talk with your EHR vendor, consultant, MSO or other service providers to analyze and plan for compliance.

Living in the Cloud(s)

I wrote about cloud computing in an earlier post and discussed some of the general pros and cons involved with the idea.  For attorneys, doctors and other professionals that are regulated, cloud computing creates some new wrinkles.  For attorneys, protecting the confidences of clients is an ethical obligation.  The unauthorized disclosure of client secrets can lead an attorney to disciplinary action and disbarment.  For physicians and other health care providers, federal laws on the privacy of patient information put providers at risk for substantial fines for inappropriately disclosing patient health information (or otherwise not complying with HIPAA’s privacy and security rules).  Using the cloud for applications that might have such confidential information adds a layer of uncertainty for the practitioner.

On the other hand, cloud computing is coming to a practice near you whether you like it or not.  For example, an increasing number of attorney practice management systems are cloud-based, such as Clio.  Legal research tools like FastCase, LexisNexis, Westlaw and Google Scholar are all cloud-based systems (in the sense that the information being searched is not stored on your local network but in internet-based database repositories that you access through your web browser).  And a growing number of email providers, including Google Apps for Business and others, have been providing cloud-based email solutions for custom domain names.

State bar ethics groups and the ABA have been working on ethics opinions about these cloud-based systems.  North Carolina’s Bar had initially proposed a restrictive rule on the use of cloud computing systems by attorneys in the state.  The NC Bar had suggested that the use of a web-based system that allows clients to complete a questionnaire online for specific legal documents, which are reviewed by an attorney before becoming final, represented a violation of the state’s ethics rules.  However, the NC Bar later revised its opinion and indicated that cloud computing solutions can be acceptable, so long as the attorney takes reasonable steps to minimize the inadvertent disclosure of confidential information.  “Reasonable,” a favorite word of attorneys for generations, has the virtue and vice of being subject to interpretation.  However, given the pace of change of technology, a bright line rule that favors one system over another faces prompt obsolescence.

In the context of the NC Bar 2011 Formal Opinion 6, for software as a service providers, ethics considerations include: (a) what’s in the contract between the vendor and the lawyer as to confidentiality, (b) how the attorney will be able to retrieve data from the provider should it go out of business or the parties terminate the SAAS contract, (c) an understanding of the security policy and practices of the vendor, (d) the steps the vendor takes to protect its network, such as firewalls, antivirus software, encryption and intrusion detection, and (e) the SAAS vendor’s backup and recovery plan.

Can you penetrate past the marketing of a vendor to truly understand its security practices?  For example, Google does not even disclose the total number of physical servers it uses to provide you those instant search results (though you can learn where its data centers are – there is even one in Finland as of the writing of this article – here).  And, in spite of Google’s security vigilance, Google and the applications it provides have periodic outages and hacking attacks, such as the Aurora attack on Gmail that became known in 2010.  Other data centers and service providers may be less transparent concerning these security issues.  In some cases, the opacity is a security strategy.  Just as the garrison of a castle wouldn’t advertise its weak spots, cloud providers aren’t likely to admit to security problems until either after the breach is plugged, or the breach is irreparable.

What’s your alternative?  For you Luddites, perhaps paper and pencil can’t be hacked, but good luck if you have a fire, or a disgruntled employee dumps your files in a local dumpster for all to see one weekend.  For those of you that want computer systems in your practice, can you maintain these systems in-house in a cost-effective manner?  Do you have the resources to keep up with the software and hardware upgrades, service contracts, backup & recovery tests, and security features needed to reasonably protect your data?  How does that stack up against professional-grade data centers?  Are you SAS 70 or SSAE 16 compliant?  Do you know how the data you access is encrypted?  In functional terms, do you really exercise more effective control over your security risks with IT employees than with a data center under a reasonable commercial contract?

There are a lot of considerations.  And the best part?  They keep changing!

Data Breach: No Joke

As recently noted by the New York Times in this article, the health data of nearly 11 million people has been inadvertently disclosed in violation of patient privacy.  Electronic health record systems alone are not to blame: the improper disposal of paper medical records in dumpsters has happened more than once (the HHS website notes 23 reports of data breaches exposing 500 or more paper patient records in one way or another from 2009-2010).  However, computer databases make it easier to disclose larger amounts of health data than in the paper-records days of yore.  As a part of the American Recovery and Reinvestment Act of 2009, Congress enacted federal reporting requirements in the event of a data breach by a covered entity.  For the entire law, click here: ARRA Enrolled Bill.

Section 13402 provides the statutory basis for requiring a covered entity to report to the Secretary of Health and Human Services when the security of protected health information is breached.  Both individual notice to the persons affected by the data breach and public notification via the local media are required when more than 500 individuals’ information has been lost due to a breach.  In addition, the covered entity is required to advise the Secretary in the event of a breach affecting more than 500 individuals (if fewer, the entity can keep a log and submit it at the end of the year).
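The reporting paths under Section 13402 can be sketched as a simple decision, keyed to the more-than-500-individuals threshold described above.  The function name and the wording of the obligations are my own illustration, not statutory text:

```python
# Illustrative sketch of the Section 13402 reporting paths. The 500-individual
# threshold comes from the discussion above; names and phrasing are hypothetical.

def notification_obligations(individuals_affected: int) -> list[str]:
    """Return the reporting obligations triggered by a breach of a given size."""
    obligations = ["individual notice to each person affected"]
    if individuals_affected > 500:
        # Larger breaches trigger public and immediate regulator notice.
        obligations.append("notice to the HHS Secretary")
        obligations.append("public notification via local media")
    else:
        # Smaller breaches may be logged and reported annually.
        obligations.append("log the breach and submit the log to HHS at year end")
    return obligations

print(notification_obligations(600))
print(notification_obligations(40))
```

Note that individual notice is owed in every case; only the regulator-facing and media obligations turn on the breach's size.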

Patients may suffer identity theft and public embarrassment when their health information is lost by a covered entity.  And, if the breach is substantial enough, the covered entity may lose patients and clinical revenue as a result.  Health care providers can reduce the possibility of such data losses by having strong policies and internal database controls that limit access to, and portability of, data by their employees and contractors.  Unfortunately, the problem of data loss (whether by accident or because of hacking) appears not to be improving, in spite of a number of sentinel events in the last few years, including the loss of a laptop with health data on over 20 million veterans served by the Veterans Administration.

Preparing for Disasters – Practical Preparedness

Disasters happen in the world, some of which may directly affect your organization.  Preparing for disasters, whether they be hurricanes, tornadoes, terrorists, hackers, power outages, fires, or earthquakes, means thinking about (a) how your business operates today and (b) how it would likely operate in the event of a disaster, and then (c) developing a practical, well-designed and testable plan for recovering from a variety of disasters.  Preparedness is also a commitment to ongoing planning and the investment of a certain amount of resources each budget period to the process, because your plan will evolve with the extent and scope of your business as it changes over time.

In Maryland, there are not specific ethics rules that require lawyers to prepare for disasters, though common sense would tell an attorney that missing a deadline because of a disaster is still a missed deadline, and the loss or inadvertent disclosure of confidential client information is still a loss whether or not caused by a natural disaster or simple human error.   Both circumstances can lead to an ethics complaint from a disgruntled client.  For attorneys, there are a number of resources available from the ABA to help firms do a better job of preparing for a disaster.

Doctors’ offices joining the electronic health record revolution because of the incentives under ARRA also will need a plan for disaster recovery.  The HIPAA security regulations include standards for preparing for and recovering from disasters (45 CFR § 164.308(a)(7) is addressed specifically to contingency planning for covered entities and business associates).  The security regulations are cloaked in terms of “reasonableness,” which means that a covered entity’s disaster recovery planning efforts should be commensurate with the amount of data and resources it has.  So, a practice of two physicians that sees 8,000 patient visits a year is not expected to have its data available in three DR hot sites.  But, if you are a major insurance carrier, three DR hot sites might not be enough for your operation.  However, in neither case is no plan an acceptable answer.  Nor is a plan that has never been tested.

Risk Assessment

So where do you start?  The logical starting point is a risk assessment of your existing systems and infrastructure (also required of covered entities under the HIPAA security rules in section 164.308(a)(1)).  A risk assessment will guide you through gathering an inventory of your existing systems, and help to identify known and potentially unknown risks, along with the likelihood that such a risk will be realized and what you are doing now (if anything) to mitigate that risk.  The risk assessment will also help you to categorize how critical a system is to your operations, and will also identify severe risks that remain unmitigated.  This resulting list helps you to come up with a starting place for the next step: doing something about it.

The Disaster Plan

In parallel, you can also use the inventory of your existing systems and risks to develop a disaster recovery plan.  First, you now have a list of your critical systems which are your highest priority to recover in the event of a failure.  Second, you also have a list of likely risks to those systems with the likelihood based in part on your past experience with a particular disaster.  These lists help you to identify what you need to protect and what you need to protect from.  The other two questions you need to ask for each system are: (a) how much data can I stand to lose in the event of a disaster? and (b) how long can I wait to have my system restored to normal operations?

This analysis of your existing systems, risks, and business requirements will help lead the practice to a plan that includes procedures for how to function when systems are unavailable, and how to go about restoring an unavailable system within the business requirements of the practice.  Once you have your plan, and have implemented the systems or policies required by the plan, your next step is to test the plan.  Table top exercises allow you, in a conference room, to walk through the staffing, procedures, and possible issues that may arise as a result of a particular disaster scenario.  Technical testing permits your IT staff to make sure that a disaster recovery system works according to the expected technical outcomes.  Full-blown testing actually simulates a disaster, perhaps during non-business hours, running through the disaster plan’s procedures for both operations and IT.


As an example, suppose that you have an electronic health record system.  This is a critical system based on the risk assessment.  In the last five years, you have had a virus that partially disabled your records system, causing an outage for two business days, and you have had your database crash, causing you to lose a week’s worth of data.  You have implemented two mitigations.  The first is anti-virus software that regularly updates its virus definitions and regularly scans the system, removing any viruses it finds.  The second is a backup system that makes a weekly backup of your system’s data and stores it in a separate storage system.

Based on interviews with the practice staff and owner, the records system is used as part of patient care.  During normal business hours, an outage of the system can result in patients being re-scheduled, and also creates double work: completed visits must be documented on paper and then again in the record system when it becomes available.  The practice has indicated that the most it can be without the system is a single business day, and the most data it can lose from this system is the most recent four hours of data entry (which can be reconstructed by the clinical staff that day).

You then evaluate the mitigations in place today that allow for a system recovery in the event of a likely disaster (a virus or database crash, based on the past experience of the practice).  The backup system today only runs once per week, which means that a crash or virus occurring later in the week would result in more than four hours of lost data.  Recovery from the backup device to a new server also appears to require more than a business day, because the practice has no spare server equipment available.  You would have to either start over with the existing server (reinstalling the operating system and database software, then restoring the data from the backup) or purchase a new server and wait for delivery to complete the restore.

The conclusion here is that while there is an existing mitigation for recovery from a likely disaster, the mitigation does not meet the business requirements of the practice.
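The gap analysis in this example boils down to two numbers per system: the recovery point objective (how much data you can stand to lose) and the recovery time objective (how long you can wait for restoration).  A simple check of a mitigation against those objectives might look like the following sketch, using the figures from the example above (the function name and interface are illustrative):

```python
# Check a backup scheme against the practice's recovery objectives.
# RPO = maximum acceptable data loss, in hours of data entry.
# RTO = maximum acceptable downtime, in hours.
# Figures mirror the worked example above; this is an illustrative sketch.

def meets_objectives(backup_interval_hrs, restore_time_hrs, rpo_hrs, rto_hrs):
    """Worst-case data loss equals the full backup interval, so the
    interval must fit within the RPO and the restore within the RTO."""
    return backup_interval_hrs <= rpo_hrs and restore_time_hrs <= rto_hrs

# Practice requirements: lose at most 4 hours of data,
# be back up within one business day (8 working hours).
rpo, rto = 4, 8

# Current mitigation: weekly backup (168 hours), restore takes ~2 days.
print(meets_objectives(168, 48, rpo, rto))  # fails both objectives

# A backup every 4 hours with a standby server restorable in 4 hours
# would satisfy both objectives.
print(meets_objectives(4, 4, rpo, rto))
```

The arithmetic is trivial, but writing it down per system keeps the conversation with vendors and consultants anchored to the practice’s actual requirements rather than to a product’s feature list.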

Budget for New Sufficient Mitigations

Once you have your list of unmitigated or insufficiently mitigated risks, the next step is to look for mitigations that you could implement on your network.  A mitigation might be a disaster recovery system or service, or it might be some other service or product that can be purchased (like anti-virus software, a hardware warranty, a staff person, etc.).  At this point, the help of a technical consultant may be required if you don’t have your own IT department.  The consultant’s role here is to advise you about your options and the likely costs to purchase and implement a solution that will meet your business requirements, given your likely disaster risks.

Once sufficient solutions have been identified, the next step is to purchase one and implement it.  From there, testing is key, as noted above.  An untested plan is not much of a plan.



Disaster Recovery and the Japanese Tsunami

The art of disaster recovery is to plan for what may be the unthinkable while balancing mitigations that are both feasible and reasonable for your organization’s resources and circumstances.  On March 11, Japan was struck by a massive earthquake and tsunami that caused enormous destruction, estimated at a total loss of $310 billion.  Over the last several weeks, one of the major failures has been at the nuclear power complex in Fukushima, home to six nuclear power plants.  This disaster continues, as of this writing, as at least two of the plants remain in a critical state because of the failure of the complex’s power and backup power systems, which helped to control the temperature of the nuclear fuel rods used to generate power at the plants.

As an unfortunate consequence, many people have been exposed to more radiation than normal, food grown in the area of the plant has shown higher levels of radioactive materials than normal, radioactive isotopes in higher-than-normal concentrations have been detected in the ocean near the plants, and numerous nuclear technicians have been exposed to significant radiation, resulting in injuries and hospitalizations.  As far as disasters go, the loss of life and resources has been severe.  And like other major environmental and natural disasters, the effects of the earthquake and tsunami will be felt for years by many people.

Natural disasters like this one cannot be prevented.  We lack the technology today to effectively predict or control for these kinds of events.  And while these larger scale disasters are relatively rare, planners still need to assess the relative likelihood of such events, and develop reasonable mitigation plans to help an entity recover should such a disaster occur.  Computerized health records present an opportunity to permit recovery in that the data housed by these systems can be cost-effectively backed up and retained at other secure locations, permitting system recovery and the ability to continue operations.  In contrast to digital files, paper records are far less likely to be recovered were a tsunami or other similar natural disaster to occur and wash the records away.

Even the best recovery plan, however, will be severely tested should a major disaster be realized.  Japan was hardly unprepared for a major earthquake, and still is struggling to bring its nuclear facilities under control nearly three weeks later.  However, having a plan and testing it regularly will increase the odds of recovery.  My thoughts are with the Japanese during these difficult times.

Disaster Recovery Planning

I had the pleasure recently to present to a group of IT and business leaders on the topic of disaster recovery.  Based on some of the questions and feedback from the group, I thought I would add some comments on this topic on the blog.

First, a fair number of attendees commented that they were having a hard time explaining the need for disaster recovery, or obtaining the necessary resources (staff time, money, or both) to implement a solution.  Of the attendees, only a handful reported they had completed the implementation of a disaster recovery solution.  I think these are common problems for many organizations that are otherwise properly focused on meeting the computing needs of their user community.  Disasters generally happen infrequently enough that they do not remain a focus of senior management.  Instead, most businesses focus on servicing their customer base, generating revenue, and addressing the day-to-day issues that get in the way of these things.

Second, one of the attendees properly emphasized that IT staff are an important part of the planning equation.  Without qualified and available staff, a disaster recovery system will not produce the desired outcome of a timely and successful recovery, no matter how much the system itself costs.

Third, at least one attendee indicated that they had implemented a solution with a service provider, but the solution was incomplete for the organization’s recovery needs.  This is also a common problem for organizations whose systems change significantly over time but that do not include disaster recovery in the new system acquisition process.

Disaster recovery as a concept should not be introduced as an IT project, in spite of the fact that there are important IT components to any disaster recovery plan.  Instead, disaster recovery is a mindset.  It should appear on the checklist of items to consider for organizational decisions, along with other considerations like “how will this project generate revenue?” and “how will this project impact our commitment to protecting customer data?”

Disaster recovery solutions are more than just another virtual server or service.  Disaster recovery is another insurance policy against the uncertainty of life.  Organizations routinely purchase liability insurance, errors and omissions insurance, and other insurance policies on the basis that unanticipated negative events will inevitably occur.  System failures, computer viruses, and other environmental failures are inevitable, even if rare.  Disaster recovery solutions are a hedge against these unfortunate events.

Risk assessments for information systems help organizations to quantify their exposure to the unknown, and to estimate the potential impact to the organization if a threat is realized.  Risk assessments also provide an orderly way to prioritize system recoveries, so that a disaster recovery solution focuses on mitigating the largest risks to the most critical information systems.  As was pointed out at the presentation, payroll systems often seem the most critical, but the mitigation for the unexpected failure of a payroll system may not be a computer solution at all.  Instead, the organization may elect to simply pay employees cash based on their last paycheck, and reconcile payments once the payroll system is available again.