Health Information Exchange & Sharing Your Health Data

The ARRA (American Recovery and Reinvestment Act) provides incentives for qualifying health care providers that implement health IT systems in the next few years.  Among the requirements for receiving the incentive is that the provider can demonstrate the health IT system “is connected in a manner that provides, in accordance with law and standards applicable to the exchange of information, for the electronic exchange of health information to improve the quality of health care.”  There is substantial incentive, therefore, for health providers to implement systems that can effectively exchange data with other systems through a Health Information Exchange (HIE).

ARRA is not the first statute to push the exchange of information in the health care market.  In fact, HIPAA, when it was originally implemented in 1996, provided authority for the Secretary of Health and Human Services to establish data exchange guidelines for claims and eligibility data with health insurers.  These standard formats, as defined by ANSI, pushed the health industry into an era of electronic data exchange with most health insurers.  Of course, what’s on a claim form to the insurance company is not the same as the kind and extent of the data that would be available in a health IT system like an electronic health record (EHR).  The clinical data sent to insurers – the patient’s diagnosis – is shorthand in comparison to the significant amount of clinical information collected on a patient, like lab results, patient histories, or reports from specialists.  And consistent storage of this information in EHRs is in shorter supply than diagnosis data in their practice management system cousins.  Even the patient medication list, which is typically stored as structured data in most health records, may not necessarily be stored in a consistent format across EHRs.

HIE systems today face a substantial uphill battle in collecting and displaying data from a variety of information systems in a way that consumers of that data can actually use.  There is substantial pressure on the health market, however, to improve efficiency.  Today the U.S. health market struggles to manage the care of patients effectively, partly because of the sheer amount of data available on each patient, and how much of it is redundant but inconsistent.  For patients with significant health problems, visits to a variety of medical professionals produce a fair number of disparate documents containing sometimes conflicting information about the patient.  A patient taking 5 or 6 different prescriptions may forget one when asked by one specialist; different physicians may end up ordering redundant tests for the same patient; patients seeking narcotics may be able to play physicians off of each other.  HIE systems present a possible solution to the problem of securely sharing information between health care providers that serve the same patient.

Therefore, as incentives and pressures are placed on the market to improve efficiencies, I would anticipate that some of the technical issues with exchanging health information will be resolved.  That leaves a number of other areas to be more completely addressed, including patient privacy, the quality of data and the ability to trust the source of the data, and backup and redundancy.

The Privacy Problem

One of the great challenges for the HIE movement is maintaining patient privacy.  HIPAA was originally implemented in part specifically to address patient privacy, as were a number of state laws on this topic (for example, the Maryland Medical Records Act, see Md. Health-Gen. Code Ann. § 4-301 et seq.).  And other states are getting in on the action to protect consumer privacy, including Massachusetts, Minnesota, and Nevada, just to name a few.

However, laws alone may not be enough to effectively regulate and protect the availability of health data.  Under the present HIPAA enforcement regulations (which were modified by ARRA this year), the top fines (where the act in violation of the security regulations was negligent rather than intentional) are relatively low compared to the potential size of an HIE (for example, if a company like Google or Microsoft were to become a dominant HIE) because the fines are a flat rate per incident rather than being scaled according to the company’s gross revenue or the severity of the breach or finding.  ARRA did move in the right direction this year by implementing a four-tiered approach to violations in place of the original enforcement authority under HIPAA, but further scaling may be required for this to become an effective deterrent to lax security practices.

Furthermore, having a patchwork of privacy laws increases the overall cost of compliance for HIEs, which increases the cost to implement these systems without necessarily improving the actual security of the information stored at the HIE.  This is caused by overlapping regulation, along with the expense of responding to multiple authorities with the right to audit or investigate the HIE (as larger HIEs will undoubtedly operate across state lines).  Sadly, I imagine that this problem will probably get worse before it gets better, given the number of relatively autonomous sovereign powers within our country (50 states + the federal government) and the scope and scale of the privacy issue being considered.

I say that because of the amount of data that will likely become available within HIEs across the nation, which will eventually hold the health data for all 300 million of us.  Assuming that the typical patient’s chart is between 5 and 10 megabytes (with images and other PDF attachments that are not as small as documents stored within a data table), the total data storage for all citizens would be between 1,500 and 3,000 terabytes – or about the total storage capacity of 30,000 new MacBooks.  For comparison, in 2006, Google’s estimated storage for its entire operation was about 850 terabytes, used to store information on about 24 billion web pages.  It is a lot of data, and a lot to manage.  Given today’s fractured regulations, there will be substantial governmental interest in further regulating this data in the next few years.  However, without more consistent regulations, patient privacy may not be effectively protected.
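
For what it’s worth, the arithmetic behind that estimate is simple enough to sketch in a few lines of Python (the per-chart size is the assumption above; everything else is just multiplication):

    # Back-of-the-envelope arithmetic behind the storage estimate above.
    # The 5-10 MB per chart figure is the assumption from the text.
    MB_PER_TB = 1_000_000            # decimal units: 1 TB = 1,000,000 MB
    population = 300_000_000         # rough U.S. population

    for chart_mb in (5, 10):
        total_tb = population * chart_mb / MB_PER_TB
        print(f"{chart_mb} MB per chart -> {total_tb:,.0f} TB nationwide")

    # Prints:
    # 5 MB per chart -> 1,500 TB nationwide
    # 10 MB per chart -> 3,000 TB nationwide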

Changing Attitudes Towards Privacy

Our future privacy policies may also be shaped by our youth’s attitude towards privacy today.  Social networking sites, for example, expose a lot of information about the youngest among us, but the predominant users of these systems don’t seem to mind very much.  Now, of course, Facebook is not designed for users to post their most recent blood sugar levels, so who knows whether college kids would treat that information the same way they treat pictures snapped of them by the college paparazzi at the fraternity Friday night bash.  Still, it stands to reason that the next generation’s attitudes towards privacy will be substantially different from those of the generation presently called to govern the nation.

The result may be a reduced emphasis on privacy paired with increasing criminal penalties for those that engage in theft of information.  For example, perhaps instead of worrying as much about whether health data is squirreled away in an underground bunker with Dick Cheney, the future leaders of the nation will make this data generally available via the internet, ultimately reducing its value to would-be thieves.  For myself, I can’t say it matters much if others know that I have high cholesterol and a family history of diabetes, but then I don’t think there is as much stigma attached to either of these conditions as there might once have been (or might still be for other health issues).

Data Quality and Trusted Sources

HIEs will also need to address head on the quality and reliability of the data stored in their databases.  Today, trust between data systems generally does not go beyond the initial setup of some kind of private network and agreement on the file formats that are acceptable for exchange.  Inherently, one system trusts the data it receives from the other and merely re-publishes it into its own database, identifying the source of the data.  Usernames and passwords may simply not be enough for everyone to know that the data being sent or received is accurate and reliable.

In addition, HIPAA (and some other laws) has placed only a small emphasis on technical encryption, and the result is that little has been done in most systems today to ensure that data entered cannot later be repudiated by the person who purportedly entered it.  For example, many commercially available database systems are not natively encrypted.  Local area network activity on the wire is rarely encrypted, as database systems rely on border security devices to keep outsiders off the LAN.  Passwords are not consistently complex across an enterprise (especially where multiple database systems maintain their own passwords and accounts), and certainly cannot reasonably be changed frequently enough to ensure a password has not been compromised (without the user community revolting against the IT staff).  And users routinely share passwords in spite of the numerous repeated messages from system administrators not to do so.

Furthermore, data exchanged between systems relies on the initial configuration of the networking that connects the two systems to remain uncompromised.  In the typical data exchange design, there is no further verification that the messages received across these systems are correct.  TCP itself was designed with a checksum in each packet, but that only tells the receiver whether the packet received matches what the source device intended to send, not whether the data actually comes from the human or system source alleged (e.g., the laboratory technician or physician that created the entry in the first place).
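
To illustrate the kind of application-level check I have in mind, here is a minimal sketch (my own example, not any HIE standard) in which the sending and receiving systems share a secret key and attach an HMAC to each message, so the receiver can confirm both that the payload was not altered and that it came from a holder of the key:

    import hmac
    import hashlib

    # Hypothetical shared secret distributed out-of-band to the sending lab
    # system and the receiving EHR; in practice key management is the hard part.
    SHARED_KEY = b"example-key-not-for-production"

    def sign(payload: bytes) -> str:
        """Sender: compute an HMAC-SHA256 tag over the outgoing message."""
        return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

    def verify(payload: bytes, tag: str) -> bool:
        """Receiver: recompute the tag and compare in constant time."""
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    message = b"lab result: glucose 182 mg/dL for patient 12345"
    tag = sign(message)
    assert verify(message, tag)             # unaltered message verifies
    assert not verify(message + b"0", tag)  # any alteration is detected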

I anticipate that authentication will move towards far more sophisticated, multi-level approaches (even though the biometric movement seems to have lost steam, at least in the general consumer market).  For example, instead of or in addition to a username/password, systems may also implement a token or other physical card to grant access (such systems exist and are in general use today for some systems).  Other security measures may involve thumbprints or other biometrics.  I would also imagine that more sophisticated encryption algorithms could be used beyond a 128-bit cipher, and that encryption might occur at a more basic level than it does today (if transmissions are encrypted at all).  For example, databases themselves may be encrypted at a record or table level, or application access could be managed through an encrypted socket instead of the plain text many operate over now.
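
As one concrete (and simplified) illustration of record-level encryption, the sketch below uses the Fernet recipe from the third-party Python cryptography package to encrypt a single sensitive field before it is stored; the field names, values, and key handling are assumptions for the example only:

    from cryptography.fernet import Fernet

    # In practice the key would come from a key management system,
    # not be generated alongside the data it protects.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = {
        "patient_id": "12345",
        "medication_list": "lisinopril 10mg; metformin 500mg",
    }

    # Encrypt the sensitive field before it is written to the database.
    stored_value = cipher.encrypt(record["medication_list"].encode())

    # Decrypt only when an authorized application needs to display it.
    print(cipher.decrypt(stored_value).decode())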

Beyond user access controls for entering data, surely there could be some additional layer of verification once data has been received from a producer system, so that it could be, by design, independently verified before being committed to the receiving system.  The alteration (or just erroneous entry) of data in transport from one system to another creates the real possibility of a bad health care decision by professionals using the data.  This is certainly one of the major weaknesses of consumer-level HIEs such as those from Google or Microsoft, which must rely on the consumer to enter their own lab and pharmaceutical information into the database when that data is not available electronically, or on data providers that rely on administrative or clerical staff to do the data entry without further review before distribution.

HIE Backup and Disaster Recovery

Today, a number of technologies exist that allow for data backup and redundancy to ensure that systems can be highly available and resistant to significant environmental or system disasters.  One category of technology that addresses redundancy is cloud computing, which is a kind of modern equivalent to what application service providers (ASPs) of the 1990’s were offering, or what the ancient mainframes of yesteryear offered to computing users back in the bad old days of the 1970’s.  What is fundamentally different today, however, is the possibility of having massively redundant and distributed information systems that belong to a cloud, where both ASPs and mainframe computing were often centralized into one server room or series of server rooms in one facility.

A common example of computing in the cloud today is Gmail, an email service provided by Google for free to consumers.  There are still, somewhere, servers connected to the internet and controlled by Google that will respond to SMTP requests, but Google most likely has these servers distributed all over the planet and connected to a larger, redundant network infrastructure.  Data stored on these servers is likely replicated in real time so that all Gmail replication partners are up to date, regardless of which one you actually connect to when you use your web browser to navigate to your email account.  Gmail has been around for some time now, and there are a fair number of users (26 million according to one article as of last September; Wikipedia claims there are 146 million Gmail users each month as of July 2009).

However, even Gmail has outages, in spite of the sophistication of its backup and redundancy.  These outages are inconvenient to email users, but could be fatal if the data were relied upon in emergency rooms.  And local EHRs undoubtedly fail more often than much larger, hosted solutions.  Perhaps the incentives in the market for HIEs and EHRs will push us into a new age of reliability in IT, based on cloud computing ‘2.0’.

Future is Fuzzy

While it is not clear what may happen as more data becomes available, I can say that the amount of money on the table under the ARRA, in state budgets, and privately in the hands of organizations like Microsoft and Google is pushing health information exchanges to the forefront of health IT initiatives.  Making more accurate, adequately protected information available and shared is very likely to improve health outcomes and increase the efficiency of health care delivery.  My hope is that we can solve some of the more nagging technical and privacy concerns in the short term.

Implementing Your Electronic Health Record System

Health IT has been put back into the forefront of the Obama national health care initiative, in part because of financial incentives built into the ARRA for health care providers that implement and meaningfully use a health technology system in the next few years. The cost savings are premised in part on the success of the installation and implementation of the information system to be used by health care providers. This article will focus on some of the details of implementing an electronic health records system, along with some of the pitfalls that can keep a project from being completed successfully.

The End Goal is Meaningful Use
In order to receive reimbursement from the Medicare or Medicaid program, the ARRA requires that a provider demonstrate meaningful use of the system, connection to a health data exchange, and submission of data on clinical quality measures for patients at the practice (see my blog for more details). Reaching these goals goes beyond the mere technical installation of some computer system; meaningful use in particular will likely require health care providers to show that they actually use the computer system in managing patient care, reducing errors, and improving health outcomes for individual patients.

Getting there requires effective planning for the project and a productive implementation process.

The good news for providers who want to implement an electronic health record is that: (a) the data a provider needs to effectively see patients will be available when needed (no more lost chart syndrome), (b) the chart documentation will support the diagnosis and E&M codes billed to the insurer, (c) electronic health records can be tightly integrated with a practice management system to reduce data entry errors and improve billing, (d) most electronic health records will make clinical or mandated reporting easier as compared to paper charts, (e) lab results can be electronically imported into the electronic health record from major lab providers, (f) improved E&M coding can lead to better reimbursement, and (g) an electronic health record investment can be viewed by your staff as an investment in them, leading to higher staff retention rates and satisfaction. But there is a cost to achieving these benefits.

For one, some of the office workflows for handling patient care may need to be modified or adjusted to incorporate the electronic health record. Some workflows that operate on paper in an office will not convert efficiently to a computer system. Forms used to process or document patient care may also need to be modified when they are converted into the electronic health record.  Electronic health record installations for health care providers tend to expose workflow problems and breakdowns that require attention in implementation for the project to be successful.

Secondly, all the staff in the office will need to be computer literate, and generally, physicians and other health care providers will need to be able to use a computer effectively while examining their patients. This has become less of an issue as more doctors and other providers are trained to use a variety of computer systems at medical school, but computer literacy is still a major issue for some practices in the nation.

Third, electronic health record projects are high risk – there is a substantial chance that the project will be derailed for any number of reasons, including a lack of a process for effectively making key decisions, office politics, the capital expense to acquire computer hardware and software, and a lack of technical expertise among the implementation team, among other challenges. These can be overcome or at least mitigated by sufficient advanced planning by the organization.

And finally, most studies of electronic health record installations suggest that your practice will be in the minority of practices using an electronic health record (though there has been an improvement in the market penetration here over the last few years). This is partly because of the expense of implementing the systems, and the longer-term costs of maintaining them.

You can get there if you have a good plan.

Manage Expectations Early and Often
No, an electronic health record will not solve your workflow problems without your help. An electronic health record is not free, even if licensed under an open source software license. The data that is collected in the electronic health record is useful, but will require further technical assistance to be useful for research or analysis. Staff can’t keep doing things the same way and expect a different outcome (besides this being one definition of insanity, electronic health records are not magical beasts with wings, and magical thinking does not lead to a happy end user). Doctors won’t be able to see 50 patients per day after install if they were only able to manage 20 per day before. A project that lacks goals that are attainable will fail.

Any system project can be a victim of unreasonable or unrealistic expectations. Those leading the project need to be frank about what can be achieved and at what cost to the staff using the electronic health record. Expectations can be managed by establishing tangible goals and having a workable project plan with real milestones and a clear assessment of the resources (financial and staff time) that will be needed to reach each one. For example, implementing the electronic health record two months from purchasing it can be realistic, but only if the provider’s office is prepared to commit significant time to the planning and installation, particularly in identifying forms that need to be developed electronically and lab interfaces that need to be installed (two of the most time-expensive portions of an electronic health record implementation). The need for effective training also cannot be overstated – staff should not expect that they can pick up use of the system in an hour or two, or learn as they go with live patients in the room.

Picking an Information System
Finding the right electronic health record is an important task and should not be left to chance. There are a lot of electronic health record vendors in the marketplace today with a variety of installation bases, histories, and levels of effectiveness. Developing a written request for proposal (RFP) and requiring an objective process for evaluating responses to the RFP is essential to fairly evaluate the vendors in the marketplace. Sending the RFP out to 100 vendors is not helpful, nor is having a 100-page requirements section. But your prospective partner for this project should be able to effectively respond to your RFP and explain in satisfactory detail what the options and costs are for implementing the proposed system.

Furthermore, your organization should form a search committee comprised of enough staff to provide meaningful input on the responses to the RFP, and to interview qualified vendors to assess whether they meet the needs of the essential practice areas. Vendors should also be able to competently demonstrate their product to the committee’s satisfaction, so that the committee can identify the best two candidates for the job.

To help encourage staff buy-in (where your facility is sufficiently large that the search committee may not represent all interests), I have also recommended that the finalists demonstrate their products to all staff, and that the final decision be put to a group vote. This doesn’t work in all organizations, but the more effort you put into including the staff who will use the system, the more buy-in to the project you will garner, which increases the odds of a successful implementation.

Vendor Negotiations
Once you have identified the best candidate electronic health record, your organization should begin to examine the terms of the contract with the electronic health record vendor. Most vendors have a standard form contract that describes the terms of the relationship, particularly for ongoing support and updates to the product. These contracts are complicated and an attorney can be helpful to ensure that the contract fairly represents the relationship, costs, and promises made by the vendor along the way.

Negotiations can take some time to complete, particularly where multiple parties are involved or there are substantial costs involved. Hammering out contract details with the vendor is an important step in the planning process.

Major Milestones
Once a vendor has been chosen, most electronic health record implementation project plans will have the following major milestones on the way to a successful go live: (a) form a planning committee, (b) form a technical team, (c) review and make decisions on the requirements for the project, (d) install the server, software, and workstation software, (e) develop all required clinical content (such as electronic forms, flowsheets, and data requirements) for go live, (f) implement all interfaces for data flowing in and out of the electronic health record, (g) convert all charts from paper into the electronic health record, (h) complete staff training, and (i) go live with the system.

The planning committee should include the clinical departments that will be using the system, and should be designed to meet regularly up to and through the go live date. The committee should be charged with enough authority to make decisions about the project’s implementation, and should become your initial group of super users, meaning staff with additional training on the electronic health record. Your super users should then become sources of information for the rest of the staff as they work through integrating the electronic health record into their practice.

The technical team is comprised of the IT staff that are responsible for installing the server and workstation equipment, getting the electronic health record software and database installed properly, configuring interfaces between systems, and installing any supporting network or peripheral technology. This team should regularly report to the planning committee or the project manager for the installation.

The planning committee is responsible for making the decisions about how the electronic health record will be implemented. The vendor supplying the system should regularly participate in the committee’s meetings, and generally the project manager should chair the committee. Actions and decisions of this committee should be documented and distributed to the members. In my experience, the meetings of the committee are geared toward training the members on the details of the electronic health record so that they can determine how the system should work for their departments. These meetings can be contentious, as a number of people will need to agree, but in the longer term this process helps to make sure that the project is implemented appropriately.

This committee also should be responsible for identifying project priorities. The reality is that no electronic health record implementation can go live with every request ready – there are always too many requests and not enough time to implement all of them. This committee should be prepared to identify what’s most critical and clarify these priorities to the staff involved in the installation.

In addition, this committee should be committed to being thorough and to addressing concerns about specific implementation decisions and priorities along the way. Some decisions made early on can be very time consuming and costly to correct later.

The clinical content of the application includes the electronic forms that will be used to document care, the organization of the sections of the electronic health record that display structured data (such as lab results for a patient), and other functional areas of the electronic health record that are susceptible to modification at implementation. This development may be handled by the vendor. However, after go live the provider may need to maintain the content developed during implementation, or be in a position to add new content. In some cases, third parties may be able to sell pre-made clinical content separately from the electronic health record vendor. All of this customization of the product requires special attention to ensure that the content meets user requirements and is developed according to accepted standards of practice.

Most electronic health records support some interfacing with other products, using a common language like HL7. If interfaces with other software or third parties are essential to the implementation, substantial lead time and attention to detail are required for these interfaces to be ready at the go live date for the project.
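
To give a flavor of why interface work takes lead time, here is a deliberately simplified sketch of pulling a lab result out of a pipe-delimited, HL7 v2-style message; the message is made up for illustration, and real interfaces must also handle escaping rules, many more segment types, and acknowledgment logic:

    # A deliberately simplified HL7 v2-style lab result message (made up for
    # illustration); real messages carry many more segments and fields.
    raw = "\r".join([
        "MSH|^~\\&|LAB|ACME|EHR|CLINIC|200907221530||ORU^R01|123|P|2.3",
        "PID|1||000123^^^CLINIC||DOE^JANE",
        "OBX|1|NM|2345-7^GLUCOSE||182|mg/dL|70-99|H",
    ])

    def parse_segments(message):
        """Split an HL7 message into segments and pipe-delimited fields."""
        return [segment.split("|") for segment in message.split("\r") if segment]

    for fields in parse_segments(raw):
        if fields[0] == "OBX":
            # Positions follow the common OBX layout: 3=identifier,
            # 5=value, 6=units, 8=abnormal flag.
            print(fields[3], fields[5], fields[6], fields[8])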

Some meaningful portion of the existing paper charts will need to be converted to electronic format into the electronic health record, prior to go live if at all possible. This is a very time-intensive process, and is often used as a training opportunity for users, who can be scheduled to convert specific charts as part of learning how to use the electronic health record. However, most practices have many more charts than users available to convert them, and many project planners will budget additional resources to aid in the paper conversion process.

Some practices opt to extract specific data from a paper chart into electronic format, using specialized clinical content for this purpose. Other practices may simply scan and index the paper chart documents as is into an electronic document and attach it to the chart as the chart history. Still others will do a hybrid of these two solutions.

Training is also a very important aspect of any electronic health record implementation. From my experience, up to 20 hours of training may be required for super users of the electronic health record; the minimum is about 4 hours for sufficient exposure to the basics of an electronic health record. Depending on the total staff to be trained, scheduling training classes for an organization may be a substantial time commitment. Generally the electronic health record vendor can give guidelines on the minimums for training to gain proficiency on the system. Note that no implementation’s training will end at go live; generally post go-live training and ongoing training for new staff after the system is implemented are ongoing expenses of the electronic health record.

Greening IT Through Virtualization

Technology continues to evolve, providing people with new functionality, features, information, and entertainment.  According to Ray Kurzweil, a number of metrics for computer performance and capacity indicate that our technology is expanding at a linear or exponential rate.  Sadly, the physical manifestations of technology are also helping to destroy the planet and poison our clean water supplies.  According to the EPA, nearly 2% of municipal waste is computer trash.  While this is an improvement over recent years, only 18% of computers, televisions, and related solid waste is actually recycled by consumers, placing millions of tons of unwanted electronics into landfills each year.  Businesses contribute to this problem as well, as they are major consumers of computers, printers, cell phones, and other electronics in operating their businesses.

Computers that are placed into a landfill pose a significant environmental threat to people and wildlife.  Electronics can contain a number of hazardous materials, such as lead, mercury, cadmium, chromium, and some types of flame retardants, which, in the quantities found in disposed equipment, pose a real threat to our drinking water.  See the article here with the details.  Lead alone in sufficient quantities can damage the central nervous system and kidneys, and heavy metals are retained in the body, accumulating over time until a threshold is reached beyond which symptoms may be fatal.  See Lead Poisoning Article.  Mercury, cadmium, and chromium aren’t any nicer to people or animals.

Everyone should recycle their electronics through a reputable electronics recycler (see the Turtle Wings website for an example).  However, you can also reduce your server fleet and extend the life of your computer equipment through virtualization (see an earlier post on virtualization on my blog).  Virtualizing your server equipment means using fewer physical servers to present more virtual machines to your user community for print, authentication, file sharing, application, web, and other computer services on your network.  Fewer servers in use means fewer physical server devices to purchase over time and fewer servers to recycle at the end of their life.  Virtualizing your desktops can help by extending the useful life of your desktops (they are just accessing a centrally stored virtual desktop, on which all the processing and storage occurs, so a desktop with little RAM and CPU will work for longer), and also by reducing the amount of electricity that your organization uses per computer (if you then switch to a thin client such as a Wyse terminal or HP computing device).
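
As a rough, hypothetical illustration of the electricity side of that argument (the server counts and wattages below are assumptions, not measurements), consolidating lightly used physical servers onto a few virtualization hosts can cut power consumption substantially:

    # Rough, hypothetical power comparison: 20 lightly loaded physical servers
    # consolidated onto 3 virtualization hosts. Wattages are assumptions.
    physical_servers, watts_each = 20, 400
    virtual_hosts, watts_per_host = 3, 600

    before_kwh = physical_servers * watts_each * 24 * 365 / 1000
    after_kwh = virtual_hosts * watts_per_host * 24 * 365 / 1000

    print(f"Before: {before_kwh:,.0f} kWh/year")    # 70,080 kWh/year
    print(f"After:  {after_kwh:,.0f} kWh/year")     # 15,768 kWh/year
    print(f"Savings: {(1 - after_kwh / before_kwh):.0%}")  # about 78%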

Virtualization can also improve your preparedness for disasters, whether by flood, virus, or terrorist.  For one thing, backing up the data file that represents your virtual servers is easier, can be done during normal business hours, and can be far more easily replicated to another site than the contents of a physical server.  Furthermore, virtualization can reduce the entry costs to implement a disaster recovery site because you can use less overall equipment in order to replicate data from your production environment, so your ongoing operating costs are reduced as compared to a physical server configuration.  Testing upgrades is easier because you can duplicate a production virtual server and test the upgrade before rolling it out to the live system (which costs less than buying another physical server and running a copy of the system on it to run the testing).  Virtualizing desktops also simplifies some of the support and administrative tasks associated with keeping desktops running properly (or fixing them when they stop working right).

So, before you buy another physical desktop or server, think about whether virtualization can help save Earth and you.

Trademark Infringement: Starbucks v. Wolfe’s Borough Coffee

Starbucks is a well known, international purveyor of coffee products, with thousands of stores throughout the world.  Starbucks v. Wolfe’s Borough Coffee, Inc., No. 01 Civ. 5981 (LTS)(THK), 2005 U.S. Dist. LEXIS 35578 (S.D.N.Y. Dec. 23, 2005) (Starbucks I).  Starbucks Corporation was formed in 1985 in Washington State, after the original founders had been in business for themselves since 1971 in Seattle’s Pike Place Market.  Id. at 3.  Under a traditional trademark analysis, Starbucks has spent a substantial amount of money marketing its coffee products worldwide (over one hundred thirty-six million dollars’ worth from 2000-2003).  Id. at 5.  One should not use a trademark similar to “Starbucks” without expecting trouble.

In 2004, Wolfe’s Borough Coffee, a small coffee manufacturer that distributes its brands in a store in New Hampshire and through some New England supermarkets, was sued by Starbucks in the Southern District of New York for trademark infringement and dilution under the Lanham Act and state law.  Id. at 6.  Wolfe’s Borough Coffee was trading under two allegedly infringing names: “Mr. Charbucks” and “Mister Charbucks,” both similar to the trademark “Starbucks” used by the famous coffee house of the same name.  Starbucks v. Wolfe’s Borough Coffee, Inc., 559 F. Supp. 2d 472 (S.D.N.Y. June 5, 2008) (Starbucks III).  Yet, Starbucks lost in district court on all of its claims.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at 29.  Starbucks appealed, the Second Circuit vacated and remanded in 2007 because Congress had amended the Lanham Act in 2006 through the Trademark Dilution Revision Act, and on remand the trial court adhered to its prior decision in favor of the defendant in 2008.  Starbucks v. Wolfe’s Borough Coffee, Inc., 477 F.3d 765 (2d Cir. 2007) (Starbucks II); 15 U.S.C. §§ 1125(c), 1127 (2008); Starbucks III.

Starbucks Claims
Starbucks sued Wolfe’s under federal and state law, alleging trademark infringement under sections 1114 and 1125(a) of the Lanham Act, trademark dilution under sections 1125(c) and 1127 of the Lanham Act and also under New York law, and unfair competition under state common law.  15 U.S.C. §§ 1114(1), 1125(a) (2008); Id. at §§ 1125(c), 1127; N.Y. Gen. Bus. Law § 360-1 (1999).  This case note will focus on the allegation of trademark dilution.

In order to prove trademark dilution, the plaintiff must demonstrate that (a) the plaintiff’s mark is famous, (b) the defendant is making commercial use of the famous mark, (c) the defendant’s use came after the plaintiff’s use, and (d) the defendant’s use of the plaintiff’s mark dilutes the plaintiff’s mark.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at 22.  The defendant had conceded the first three elements, leaving only the last element of the rule in dispute.  Id.

Moseley v. Victoria’s Secret Catalogue, Inc., 537 U.S. 418, 433 (2003) requires a plaintiff to prove actual dilution rather than a likelihood of dilution in order to prevail under the Lanham Act anti-dilution section.  New York law is less stringent than federal law in this area, and the court reasoned that if the plaintiff could not prevail under state law, it also could not prevail under federal law.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at 25.  The court examined the likelihood that the defendant’s use of its marks would either blur or tarnish the plaintiff’s marks, and concluded that the plaintiff could not prevail under either standard.  Id. at 30.  Blurring occurs when a defendant uses the plaintiff’s mark to identify the defendant’s products, increasing the possibility that the plaintiff’s mark will no longer uniquely identify the plaintiff’s products.  Id. at 25.  Tarnishment occurs when a plaintiff’s mark is associated with products of a shoddy or unwholesome character.  Id. at 26.

The court’s review of the record led it to conclude that the plaintiff had failed to demonstrate actual or likely diminution “of the capacity of the Starbucks Marks to serve as unique identifiers of Starbucks’ products…” because the plaintiff’s survey results did not show an association between the defendant’s products and the mark “Charbucks,” only that respondents associated the term “Charbucks” with “Starbucks.”  Id. at 27.  The court also held that the plaintiff’s survey results did not substantiate that the mark “Charbucks” would reflect negatively on the Starbucks brand.  Id.  The plaintiff therefore lost on its dilution claims.

Change in Dilution Act

Prior to 2006, dilution of a famous mark required that the plaintiff demonstrate actual dilution to prevail under section 1125(c) of the Lanham Act.  Moseley, 537 U.S. at 433.  However, Congress amended the applicable statute to require only that the defendant’s use was “likely to cause dilution.”  Starbucks II, 477 F.3d at 766.  The Second Circuit held that it was not clear whether the amended Lanham Act’s prohibition of dilution of famous marks was coextensive with New York law, the latter being the basis for the trial court’s finding of no dilution of Starbucks’ marks.  Id.  Therefore, the appeals court vacated the trial court’s judgment and remanded for further proceedings.  Id.

On Remand

The district court took the Starbucks case back up under the amended anti-dilution statute.  To demonstrate blurring of a famous mark, the amended Lanham Act requires a court to consider all relevant factors, including: “(i) the degree of similarity between the mark or trade name and the famous mark; (ii) the degree of inherent or acquired distinctiveness of the famous mark; (iii) the extent to which the owner of the famous mark is engaging in substantially exclusive use of the mark; (iv) the degree of recognition of the famous mark; (v) whether the user of the mark or trade name intended to create an association with the famous mark; and (vi) any actual association between the mark or trade name and the famous mark.”  Starbucks III, 559 F. Supp. 2d at 476 (citing 15 U.S.C. § 1125(c)).

Degree of Similarity

The district court held that, under this factor, a plaintiff must demonstrate that the marks are very or substantially similar.  The court pointed out that the defendant’s marks appear on packaging that is very different from the plaintiff’s, and that the defendant used the rhyming term “Charbucks” with “Mister,” whereas Starbucks appears alone when used by the plaintiff; the court therefore found this factor to weigh against the plaintiff.  Id. at 477.

Distinctiveness of Starbucks Mark

Given the extent of the use of the Starbucks mark by plaintiff and the amount of money expended by the plaintiff in its marketing program, the court found this factor favored the plaintiff.  Id.

Exclusive Use by Starbucks
The fact that the plaintiff polices its registered marks, and the amount of money spent on using the mark, both led the court to weigh this factor in favor of the plaintiff.  Id.

Degree of Recognition of Starbucks’ Mark
Again, given the longevity and number of customers that visit Starbucks stores, the court found this factor to favor the plaintiff.  Id.

Defendant’s Intent to Associate with Starbucks’ Mark

The court found that while the defendant intended to allude to the dark roasted quality of Starbucks brand coffees, the fact that the marks are different and that the defendant had not acted in bad faith led the court to weigh this factor in favor of the defendant.  Id. at 478.  The court reasoned that the defendant used this mark to distinguish its own lines of coffee products, with the Mr. Charbucks brand being the dark roasted coffee as compared to other Wolfe’s Borough/Black Bear coffees.  Id.

Actual Association with Starbucks’ Mark

Here, the court found that while some respondents to the survey conducted by Starbucks did associate the defendant’s marks with Starbucks, this association alone is not enough to find dilution.  Id.  Instead, the court found that the defendant’s marks would not cause customers to confuse the defendant’s products with the plaintiff’s.  Rather, customers would tend to see the playful reference to a quality of Starbucks’ coffee – the dark roast – as distinguishing one kind of Wolfe’s Borough brand coffee from other Wolfe’s Borough brand coffees.  Id.

Tarnishment Analysis

The amended Lanham Act also provides a specific definition for dilution by tarnishment: “an association arising from the similarity between a mark or trade name and a famous mark that harms the reputation of the famous mark.”  15 U.S.C. § 1125(c)(2)(C) (2008).  The court held that the plaintiff’s survey evidence could not support a finding of dilution by tarnishment, because the plaintiff’s survey was susceptible to multiple and equally likely interpretations.  Starbucks III, 559 F. Supp. 2d at 480.  In addition, the court found that the defendant’s coffee products were not of actual poor quality, so any actual association between the defendant’s coffees and Starbucks would not likely be damaging to Starbucks.  Id.

As a result, Starbucks lost its case for trademark dilution on remand.  One might almost say that Starbucks has become so synonymous with quality dark roasted coffees that its brand name can’t be diluted by other quality coffee brands.  Instead, the Starbucks mark is a victim of its own success in the world.  Add that to the list of reasons why a Starbucks on every street corner is not a good idea.

Security Standards: Massachusetts and HIPAA

In 2009, Massachusetts became the first state to mandate that those storing personal information of Massachusetts residents comply with specific security practices, as required under 201 CMR § 17.00.  These standards went into effect on January 1, 2010.  The following is an analysis of how the Massachusetts regulation lines up with the existing HIPAA security standards described in detail in 45 C.F.R. § 164, as promulgated in 2003 and effective in 2005.

Scope
Section 17.01(2) applies the Massachusetts regulations to any persons that “own, license, store, or maintain personal information about a resident of the Commonwealth.”  201 C.M.R. § 17.01(2).

The HIPAA security regulations apply to “covered entities,” which are health plans, clearinghouses, and health care providers that transmit health information in electronic form.  45 C.F.R. § 164.104.

The HIPAA security regulations are national in scope, but limited to health care entities, where the Massachusetts regulations apply to any entity that may store personal information on a resident of Massachusetts.

Section 17.02 defines “personal information” as a Massachusetts resident’s first and last name, or first initial and last name, in combination with a social security number, driver’s license number, or financial account number.  201 C.M.R. § 17.02.

The HIPAA security regulations are applicable to “protected health information,” which is defined as “individually identifiable health information.”  This definition has been interpreted to include a patient’s name, social security number, date of birth, and other patient identifiers, along with clinical diagnostic information or other data that might be stored in a health care provider’s records related to patient care.  45 C.F.R. § 160.103.

The information to be protected by the two regulatory schemes is overlapping but distinguishable; the Massachusetts regulations are aimed at protecting financial information like credit card account numbers, where HIPAA is aimed at protecting health information.  However, a health care provider that provides services to Massachusetts residents would be obligated to comply with both regulatory programs.

Designee to Maintain Security Program
Section 17.03(3)(1) requires that an employee be designated to maintain the security program of the organization.  201 C.M.R. § 17.03(3)(1).

The HIPAA security regulations require that a person be designated who is responsible for developing organizational policies to support compliance.  45 C.F.R. § 164.308(a)(2).

Risk Assessment
Section 17.03(3)(2) requires a risk assessment of security risks to both paper and electronic systems containing personal information. 201 C.M.R. § 17.03(3)(2).

The HIPAA security regulations require that a risk analysis and risk management process be implemented at the covered entity.  45 C.F.R. § 164.308(a)(1)(ii).

Policy on Information Transport Off Business Premises
Section 17.03(3)(3) requires the development of an organizational policy on the transport of personal information off business premises. 201 C.M.R. § 17.03(3)(3).

There is no specific provision under the HIPAA security regulations that would require a specific policy on transporting protected health information.

Disciplinary Policy

Section 17.03(3)(4) requires the imposition of a disciplinary policy for violations of the security program. 201 C.M.R. § 17.03(3)(4).

The HIPAA security regulations require that a sanction policy be developed for violations of the security policies of the covered entity.  45 C.F.R. § 164.308(a)(1)(ii)(C).

Terminated Staff

Section 17.03(3)(5) requires that the security access of terminated staff be immediately terminated through a deactivation of the user’s account. 201 C.M.R. § 17.03(3)(5).

The HIPAA security regulations require that a procedure be implemented to terminate access for separated staff, but the regulation does not require “immediate” termination of access.  45 C.F.R. § 164.308(a)(3)(ii)(C).

Third Party Service Providers
Section 17.03(3)(6) requires that entities that have personal information and relationships with third parties take measures to ensure third party compliance with the security regulations.  201 C.M.R. § 17.03(3)(6).

The HIPAA security regulations require that covered entities enter into business associate contracts with third parties that may have access to electronic protected health information of the covered entity.  See 45 C.F.R. § 160.103; 45 C.F.R. § 164.314(a).

The American Recovery and Reinvestment Act of 2009 (ARRA) went further with regards to business associates; section 13401 requires that business associates specifically comply with the HIPAA security regulations found in 164.308, 164.310 and 164.312.  ARRA § 13401.

Limiting Data Sets

Section 17.03(3)(7) requires that the minimum data set be collected by an entity that collects personal information.  201 C.M.R. § 17.03(3)(7).

The HIPAA security regulations do not specifically address this requirement.

System Identification
Section 17.03(3)(8) requires that an entity identify what records or systems contain personal information, so that these records or systems can be handled in compliance with the security policies of the organization.  201 C.M.R. § 17.03(3)(8).

The HIPAA security regulations do not specifically address this, but such a system-by-system identification would likely occur within the risk analysis conducted by the covered entity under section 164.308(a)(1)(ii)(A).  45 C.F.R. § 164.308(a)(1)(ii)(A).

Physical Access

Section 17.03(3)(9) requires reasonable restrictions on physical access to paper records to prevent unauthorized disclosure of personal information. 201 C.M.R. § 17.03(3)(9).

The HIPAA security regulations do address physical access to the covered entity’s facilities, but do not address how paper records should be secured.  See 45 C.F.R. § 164.310.

Monitoring

Section 17.03(3)(10) requires monitoring of the security program to ensure effectiveness. 201 C.M.R. § 17.03(3)(10).

The HIPAA security regulations require regular monitoring of the security program to ensure that protected health information remains secure.  45 C.F.R. §§ 164.306(e), 164.316.

Review

Section 17.03(3)(11) requires at least an annual review of the security program. 201 C.M.R. § 17.03(3)(11).  The Massachusetts rules also contemplate review of the security program whenever an entity materially changes its business practices.

The HIPAA security regulations do not specify a minimum review period for the security programs of covered entities; however, the typical practice for risk analysis and review is to conduct such a review on an annual basis.  See 45 C.F.R. § 164.308(a)(1)(ii)(A).

Documentation and Incident Reporting
Section 17.03(3)(12) requires the documentation of an entity’s response to security incidents. 201 C.M.R. § 17.03(3)(12).

The HIPAA security regulations do require a covered entity to implement a policy for reporting and responding to security incidents, and the regulations provide for a requirement that activities taken under the security program be documented.  45 C.F.R. §§ 164.308(a)(6), 164.316.

Secure User Authentication
Section 17.04(1) requires a detailed secure user authentication protocol that controls user IDs and passwords, restricts access to active users only, and locks accounts after a number of unsuccessful login attempts.  201 C.M.R. § 17.04(1).

The HIPAA security regulations address the issue of user authentication more generally by requiring that a policy be developed to grant access to users based on prior authorization.  See 45 C.F.R. § 164.308(a)(4).  In addition, the regulations require a policy on managing passwords, but are not specific about how passwords are to be managed or created.  45 C.F.R. § 164.308(a)(5)(ii)(D).
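
A minimal sketch of the lockout behavior section 17.04(1) contemplates might look like the following; the five-attempt threshold and the in-memory data structures are illustrative assumptions, since the regulation does not dictate an implementation:

    from collections import defaultdict

    MAX_ATTEMPTS = 5                      # illustrative threshold, not mandated
    failed_attempts = defaultdict(int)    # username -> consecutive failures
    locked_accounts = set()

    def attempt_login(username, password_ok):
        """Return True on success; lock the account after repeated failures."""
        if username in locked_accounts:
            return False
        if password_ok:
            failed_attempts[username] = 0
            return True
        failed_attempts[username] += 1
        if failed_attempts[username] >= MAX_ATTEMPTS:
            locked_accounts.add(username)  # requires an administrator to unlock
        return False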

Access Control
Section 17.04(2) requires a detailed access control process that restricts access to personal information and requires unique usernames and password combinations assigned to each user with access to personal information.  201 C.M.R. § 17.04(2).

The HIPAA security regulations require unique user identification under section 164.312(a)(2)(i).

Encryption
Section 17.04(3) requires the encryption of all personal information that is transmitted over a wireless or public network.  201 C.M.R. § 17.04(3).

Section 17.04(5) specifically requires that personal information on laptops or other portable devices be encrypted.  201 C.M.R. § 17.04(5).

The technical safeguards of the HIPAA security regulations address generally the need to encrypt electronic protected health information, but do not address specifically when this information must be encrypted.  45 C.F.R. § 164.312(a)(2)(iv).  The transmission security section only requires that security measures be implemented to “guard against unauthorized access to electronic protected health information that is being transmitted over an electronic communications network.”  45 C.F.R. § 164.312(e).  Wireless, however, is not specifically addressed in the HIPAA security regulations, as this technology was still nascent when the original regulations were written in the late 1990’s.

The HIPAA security regulations do not specifically require that the contents of laptops or other portable devices  be encrypted.

Monitoring
Section 17.04(4) requires monitoring of unauthorized access of systems.  201 C.M.R. § 17.04(4).

The HIPAA security regulations also require the recording and examination of activity in information systems.  45 C.F.R. §§ 164.312(b), 164.308(a)(5)(ii)(C).

Systems Connected to the Internet
Section 17.04(6) requires a firewall and up-to-date operating system patches for any system connected to the internet that contains personal information.  201 C.M.R. § 17.04(6).

The HIPAA security regulations do not address these specifics, though most security experts would agree that a firewall is a minimum security feature for controlling unauthorized access to protected systems from the internet.  The issue of operating system patches is not addressed either, but, at least for Windows systems, the patching of security threats is also now a minimum feature of any organizational network.  Other operating systems and applications also regularly release patches that ought to be applied, but most of the game is in securing your Windows systems.

Anti-virus Software
Section 17.04(7) requires up-to-date anti-virus software be in use.  201 C.M.R. § 17.04(7).

The HIPAA security regulations also require some kind of protection from malicious software.  45 C.F.R. § 164.308(a)(5)(ii)(B).

Education
Section 17.04(8) requires education and training on best security practices for all personnel that use information systems.  201 C.M.R. § 17.04(8).

The HIPAA security regulations require that covered entities provide security awareness training for all staff in the organization, and require “periodic security updates.”  45 C.F.R. § 164.308(a)(5).

Summary
Many of the Massachusetts requirements for personal information correspond to the protections mandated under the HIPAA security regulations; however, the Massachusetts regulations respond to some threats that have emerged more recently, particularly laptop and portable device security and the specific, ongoing threat to Windows-based computer systems.  Need help managing your technical security?  Give us a call for help.

HIPAA, Meaningful Use and Risk Assessments

The Health Insurance Portability and Accountability Act (HIPAA) granted the Secretary of Health and Human Services the power to establish regulations for covered entities, including the information security policies of the entity.  An important aspect of the security regulations is regularly assessing risks to the entity’s information systems and infrastructure under section 164.308(a)(1)(ii)(A) of the security regulations.  For those of you attempting to qualify for meaningful use incentives, risk assessments are part of the 15 core objectives, making documented risk assessments mandatory.

The regulation specifically requires a covered entity to “conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity.”  Id.  This analytical process is helpful to the organization for several reasons.  First, doing an inventory of the information systems in use in the organization helps to categorize the extent of the organization’s exposure to security threats.  Second, spending time on identifying known problems or vulnerabilities helps to clarify what should be budgeted for mitigating them.  Third, all risk assessment methodologies require an organization to balance the potential impact of a risk against available mitigations, and to choose a reasonable mitigation (one which costs less than the organization’s adjusted risk of loss).

However, the contents of a risk analysis are not defined within the security regulations, and such an analysis is not self-defining.  There are a wide variety of analytical tools available today to help a provider assess risk to his business organization.  For example, the Centers for Medicare & Medicaid Services (CMS) created a risk assessment document that aids a provider in categorizing existing information systems, evaluating what risks exist to those systems, identifying what mitigations are in place to reduce risk, and determining what risks remain that are sufficiently great that either additional mitigations are required or the business owner must accept them in order to continue to operate the system.  See Centers for Medicare & Medicaid Services (CMS) Information Security Business Risk Assessment Methodology, version 2.1 (May 11, 2005).

The CMS methodology also provides a guide to evaluating a specific risk by estimating the likelihood of the risk’s occurrence and, if left unmitigated, the impact the risk would have on business operations.  Those risks of a certain risk level or higher are the ones that require mitigation, helping an organization to prioritize which identified mitigations should be implemented first.  Id.  For example, risks to a payroll system, while critical to an organization’s ability to operate, may not be as critical as risks to a health record system, because the organization may only issue paychecks every other week, whereas the organization’s staff access the health record system for each patient visit on a daily or hourly basis.  Alternatively, the data in the payroll system may be at less risk overall (either because the system has fewer vulnerabilities or holds less data overall) as compared to an electronic health record system.  Following this qualitative methodology helps organizations to reason through their relative risks and identify potential mitigations.
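
A toy version of that qualitative scoring, with made-up systems, threats, and 1-to-3 rating scales, might look like this:

    # Toy qualitative risk scoring: likelihood and impact rated 1 (low) to 3 (high).
    # The systems, threats, and ratings are made up for illustration.
    risks = [
        {"system": "EHR",     "threat": "ransomware",    "likelihood": 2, "impact": 3},
        {"system": "Payroll", "threat": "disk failure",  "likelihood": 2, "impact": 2},
        {"system": "EHR",     "threat": "stolen laptop", "likelihood": 3, "impact": 3},
    ]

    # Rank by likelihood x impact and flag the highest scores for mitigation.
    for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        score = r["likelihood"] * r["impact"]
        priority = "mitigate now" if score >= 6 else "monitor"
        print(f'{r["system"]:8} {r["threat"]:14} score={score} -> {priority}')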

An alternative methodology follows a similar process, but instead uses estimates of the value of each system to the business, the likelihood that a given risk will be realized, and a calculated value to the organization of potential mitigations of that risk for a particular period (for example, one year).  See Shon Harris, CISSP All-in-One Exam Guide 73 (3rd ed. 2005), McGraw-Hill/Osborne.  For example, if a provider operates an electronic health record system, and the value of that system to the organization is $500,000, the provider can identify the various risks to the data in that system and their relative likelihood of occurring, and from those calculate the maximum value of an effective mitigation for each risk.  If the risk identified is a computer virus, the analyst would consider how many computer viruses are written for the system platform, what kind of damage a typical virus could do to the system, the history of virus infections on the organization’s systems, and other factors that affect the likelihood of infection.  In addition, the analyst would examine what effort would be required to restore an infected system to normal operations, and what data could be lost as a result, to calculate the percentage of the system’s base value that would be affected by the risk.  Multiplying the base value of the system by the likelihood of the threat’s realization and by the scope of the risk’s impact on the base value gives an annualized risk value.  “Reasonable” mitigations of this risk should therefore cost less than this annualized risk value.
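As a worked illustration of the quantitative calculation described above, the Python sketch below multiplies the system’s value by an annual likelihood of occurrence and by the fraction of the system’s value an incident would affect.  The $500,000 figure comes from the example in the text; the likelihood and impact fraction are hypothetical assumptions chosen only to show the arithmetic.

# Quantitative (annualized) risk sketch with hypothetical inputs.
asset_value = 500_000        # value of the EHR system to the practice (from the example above)
annual_likelihood = 0.25     # assumed chance of a damaging virus incident in a given year
impact_fraction = 0.40       # assumed share of the system's value lost per incident

annualized_risk = asset_value * annual_likelihood * impact_fraction
print(f"Annualized risk value: ${annualized_risk:,.0f}")  # $50,000

# A "reasonable" mitigation (for example, an anti-virus and backup solution)
# should cost less per year than this annualized risk value.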

This quantitative approach is helpful for estimating risk and valuing mitigations, especially where the covered entity can identify the costs of mitigations (such as an anti-virus solution or disaster recovery system).  Something very unlikely to happen should usually not be mitigated with a very expensive solution.  Need help performing a risk assessment?  Give us a call for assistance.

HIPAA Security Regulations: Preparing for Disasters

The Health Insurance Portability and Accountability Act (HIPAA), 42 U.S.C. § 1320d-1(b), empowered the Secretary of Health and Human Services to promulgate standards for health care providers that “shall be consistent with the objective of reducing the administrative costs of providing and paying for health care.” Under 42 U.S.C. § 1320d-2(d), Congress empowered the Secretary to establish regulations for the security of health information. Under that grant of power, the Secretary promulgated rules on technical security in 45 C.F.R. Part 164. Section 164.308(a)(7)(ii)(B), under the heading “disaster recovery plan,” requires that a covered entity establish “procedures to restore any loss of data.” The next subsection, covering emergency mode operation, requires that the covered entity be able to operate its critical business processes during an emergency while still protecting the security of its electronic protected health information.

Recovering from a disaster is a relatively complicated matter for most health care providers that have implemented a sufficiently robust system or network. This is because many health care providers regularly use one system for email, another for managing appointments and billing, a different system for accounting for expenses and payroll, yet another for managing health records, and other supporting systems (such as a Windows domain controller, file and print sharing, and other administrative servers). In some cases, rather than hosting these servers in-house, providers have contracted with outside vendors that provide hosting services or application service provider (ASP) models for specific applications used by the provider.

The complexity of a recovery is increased by the nature and number of disasters that could affect a provider’s information systems. Disasters include uncontrollable natural events such as hurricanes, floods, and fires, but could also include technical failures such as a widespread and uncontrolled computer virus, or concerted terrorist activities that disrupt electrical power or harm human life. Each risk presents a different set of issues for a health care provider.

The other part of the recovery equation is the organization’s expectation for how quickly it must recover from a system failure, and its relative tolerance for an outage. Health care providers that operate around the clock (like an emergency room) may not be able to tolerate long outages and still provide care effectively for patients. Smaller providers may not be able to tolerate a long outage because of the financial impact on their practice. However, some systems may be less critical to recover because their loss will not immediately stop a practice from seeing patients. Sorting out this complexity requires planning before a disaster hits.
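One way to capture this tolerance is to record a recovery time objective (RTO) for each system: how long the practice can operate without it. The Python sketch below shows one way to record and prioritize those targets; the systems, hour values, and notes are hypothetical illustrations rather than recommendations.

# Hypothetical recovery time objectives (RTOs) for a small practice.
from dataclasses import dataclass

@dataclass
class RecoveryTarget:
    name: str
    rto_hours: float  # how long the practice can tolerate the system being down
    notes: str

TARGETS = [
    RecoveryTarget("electronic health record", 4, "needed for every patient visit"),
    RecoveryTarget("appointments and billing", 24, "a printed schedule can cover a day"),
    RecoveryTarget("payroll and accounting", 72, "paychecks issued every other week"),
]

# Address the least-tolerant systems first in the recovery plan.
for target in sorted(TARGETS, key=lambda t: t.rto_hours):
    print(f"{target.name}: restore within {target.rto_hours} hours ({target.notes})")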

Risk Analysis

Section 164.308(a)(1)(ii)(A)-(B) of the regulations requires that all covered entities engage in regular risk analysis and risk management, and act on that analysis to reduce risks to electronic protected health information. This analytical process is helpful in guiding the development of a disaster recovery plan because the risk analysis identifies all of the systems and data in use in the organization, and helps to single out the systems that are critical enough to require disaster recovery. Of those critical systems, some may require a more complicated or expensive disaster recovery plan in order to make them available more quickly in a disaster. For example, a payroll system, while critical to an organization’s ability to operate, may not need to be available as quickly as a health record system, because the organization may only issue paychecks every other week, whereas users access the health record system for each patient visit on a daily or hourly basis.

Developing A Recovery Plan

Once the organization has assessed its risks, it can develop a plan to manage them, which includes setting expectations for recovery time, describing particular disaster scenarios and the organization’s response to each, and identifying the resources needed to recover effectively from particular disasters. The plan can also identify when the organization will re-open for business following a disaster, whether the provider anticipates having to operate during a disaster (such as an emergency room staying open during a bioterrorism event), what the chain of command will be for responding to a disaster, and a communication plan for letting customers and staff know the status of the organization during and after an event.

The systems team can use this high level plan to examine in more detail the individual systems in play in the organization, and can evaluate what methods for recovery can be cost-effectively implemented to meet organizational requirements. For example, a weekly digital tape backup may be adequate for less critical systems that change little from week to week, whereas a real-time replication system may be required for highly critical data.
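A related way to frame that choice is by how much data loss each system can tolerate (its recovery point objective, or RPO). The Python sketch below maps an RPO to a backup method; the thresholds and methods are hypothetical illustrations of the trade-off described above, not prescriptions.

# Hypothetical mapping from tolerable data loss (RPO, in hours) to a backup method.
def choose_backup_method(rpo_hours: float) -> str:
    if rpo_hours < 1:
        return "real-time replication to a second site"
    if rpo_hours <= 24:
        return "nightly backup to disk or tape, stored off-site"
    return "weekly backup to tape, stored off-site"

for system, rpo_hours in [("electronic health record", 0.5),
                          ("appointments and billing", 24),
                          ("payroll and accounting", 168)]:
    print(f"{system}: {choose_backup_method(rpo_hours)}")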

Testing and Making Plan Changes

Once a systems recovery plan has been developed, testing the plan is absolutely essential to ensure that the plan on paper will actually work in a real disaster. Practice is important for several reasons. First, systems staff that participate in testing will have a better idea of what to expect if a system failure occurs and will be more capable of responding to it. Second, testing helps to identify missing steps or pieces of the plan which can be addressed by the systems department before the next round of testing. Third, testing helps to identify how much training will be required for systems staff to be able to effectively respond to a failure. And finally, involving the end users in the testing cycles helps to set expectations appropriately should an actual disaster occur.

Keeping Up With System Changes

Doing a risk analysis, creating a recovery plan, and testing the plan make up most of the preparation for recovering from a disaster, but the systems covered by the plan will change over time. All systems have patches and upgrades that need to be applied each year. In addition, organizations routinely add new systems, which may present new challenges for recovery planning. And, of course, the personnel trained to perform recoveries will change over time, which means new staff who join the team must be trained on system recovery. Do not underestimate the time and effort required to include additional systems in the testing and recovery plan.

Mitigating Disasters

Mitigations like anti-virus software, automatic patching and control systems, firewalls and other border control devices, organizational policies on system account controls, and robust permissions are also essential to any disaster recovery process. As any systems veteran will tell you, avoiding a recovery in the first place is a systems department’s top priority. But mitigations are not an excuse for skipping a disaster recovery plan, because no mitigation will be one hundred percent effective at preventing all system failures. Being prepared is always the best policy.

Need help getting your systems department prepared for disasters? Give us a call or send us an email.