Trademark Research and Trademark Clearance – A Primer

We previously discussed trademarks in general and how they are used by businesses to distinguish their brand of product or service.  In this post, we will discuss the trademark clearance process.

Trademark Clearance

Trademark clearance is the process of evaluating potential brand names against existing brand names or designs that are already in use in commerce.  Clearance is important for two main reasons.  First, a new brand gains nothing by overlapping with a name another business is already using in commerce; the overlap tends to create consumer confusion about who makes your product.  Second, if another business is already using a particular mark, you run the risk of nasty letters from that business's attorney, and potentially a lawsuit for infringement and/or dilution of that mark.  If you have already committed substantial money to developing and marketing a particular brand name, and only then discover that your mark is already in use by another in the same market area, you will not only lose the money invested in your marketing, but you could also be sued.

You should always talk with an attorney licensed in your state before making decisions about your brand or design mark.

Market Research

Generally, a person starting out in business should conduct some research on similar or competing businesses that are already established in the marketplace.  For example, if you wanted to start a new information technology company that virtualizes physical servers, you would want to find out whether other businesses offer that kind of technology.  VMware and Microsoft are major players in this market.  You would then want to take a look at the brand names these companies use to distinguish their virtualization software.  For example, Microsoft uses the brand name "Hyper-V" or "Hyper-V Server" for this product offering.  VMware uses a number of individual brand names for its suite of products.  You will note, however, that each of these names begins with a lowercase "v."  VMware also uses "VMware" itself as a brand name throughout its marketing materials.

From this preliminary research, you would likely rule out a product name that includes "Hyper-V," "VM," or "VMware."  You will also note that much of the literature on virtualization uses the marketing concept of "cloud" or "cloud computing."  It is possible that companies developing virtualization software already include the word "cloud" or the phrase "cloud computing" in their brand names.  For example, a Google search for "cloud computing" turns up paid ads for Oracle and HP and a link to IBM's web site.  So "cloud" may also not make sense as part of your software's brand name or identity.  You might likewise rule out starting your product branding with the lowercase letter "v," as VMware may enjoy "family of marks" protection.

Knowing what’s in use in the market can help you start thinking about how to describe your product, and how you want to distinguish your software from the existing companies that make this kind of software.  Understanding the words that are commonly used by customers or businesses that offer similar services to your new business will help you to get into the mindset for branding your company’s product or service.

Potential Brand Names

From here, you would want to work on developing a list of potential brand names for your product.  As you may be aware, the law recognizes varying degrees of protection for marks on a sliding scale.  Brands or marks that are merely descriptive of a product or service generally cannot be registered, except under specific circumstances (namely, that the brand has been used in commerce long enough to develop a secondary meaning).  Marks that are generic also cannot be registered; that is, marks that have come to represent a whole category of goods (think "Thermos," a trademark that about 100 years ago became so successful that everyone called any hot drink carrier a thermos).  These two groupings of marks will generally condemn a mark to little or no protection should another party start using that mark with his or her goods.  (For a general discussion of the spectrum of trademark protection, take a look at Abercrombie & Fitch Co. v. Hunting World, Inc., 537 F.2d 4 (2d Cir. 1976).)

However, a mark that is "suggestive," "arbitrary," or "fanciful" (fancy ways of saying that the mark is distinctive) will receive more protection from infringers.  For example, "Coke" and "Coca-Cola" are trademarks for a very well known brand of soft drink.  The word "coke" literally means a fuel derived from coal.  In a literal sense, that word has little to do with a carbonated soft drink, but you can see why it might be suggestive – the caffeine in this soft drink powers many a late-night programmer (along with pizza) to hack out some code for a morning deadline!


Coming up with a list of potential names is a challenge for many businesses.  However, after you get the creative juices flowing and have a list, the next step is to work with a Trademark attorney to review your list to help narrow the field.  An attorney can help you to identify “generic” or merely descriptive proposed marks that are unlikely to be accepted for registration.  In addition, an attorney can perform a preliminary search to see if there are existing marks already registered that are the same or very similar to a proposed mark.  These steps will reduce your list of potential marks.

After this hurdle, you can identify what potential marks you want to pursue.  If appropriate, an attorney can order a more comprehensive search from a trademark search business to identify, more broadly, those marks already in use in commerce that may overlap with the proposed mark.  The attorney can then help you understand your chances at a successful registration of a proposed mark.

Note: Marks mentioned in this article are the property of their respective owners.  Use of these marks is not meant to imply endorsement of this article.

Trademarks 101

What are trademarks?

A trademark is a word or words, an image, or another similar marking that identifies the source of a product or service.  Historically, trademarks were used in various aspects of commerce, including marking goods that were shipped so that, in the event of a shipwreck, the owner of the goods could claim them instead of the goods escheating to the crown.  In the age of guilds, individual craftsmen would apply a mark to the goods they made so that the guild could trace a product back to its individual maker if the good did not meet the guild's standards.  Trademarks have even been found on ancient goods from the Roman Empire.[1]  Marks have been around for a long time and serve an important purpose.

Today, there are numerous trademarks that are almost universally recognized: Coke, Pepsi, McDonald's, Cisco, IBM, HP, Ford, Facebook.[2]  All of these words signify a particular maker of a product or service – Coke and Pepsi represent their respective soft drink products; McDonald's represents a certain brand of fast food (as distinguished from Checkers, Red Robin, Wendy's, Burger King, and many others); Cisco, IBM, and HP are computer and software makers; Ford, cars; Facebook, the web site that introduced the world to social media.

Why should I register a mark?

Intellectual Property, of which trademarks are one kind, presents specific challenges for those that want to own and protect it from use by others.  Unlike physical property, which you can touch (and potentially can protect by a lock, fence, or gate), intellectual property is intangible.  As a result, securing IP requires a different action to protect it from infringement or dilution by others.  Prosecuting and securing a registration is an important way to mark out the boundaries of your intellectual property.  The research required prior to registration of a mark helps a prospective trademark owner determine if a proposed mark is already in use, and if so, for what product or service.

This research is important because it helps a prospective trademark owner avoid using a mark that infringes on someone else's intellectual property, thereby avoiding unnecessary litigation.  In addition, a "cleared" mark is more likely to be distinctive as a brand for its associated product or service, which of course is the whole point of having a brand name in the first place – to distinguish your product in the marketplace.

Registration of a trademark that is in use in commerce also helps the mark's owner protect it from unauthorized use by others, as the registration itself serves as constructive notice to would-be users of the mark.[3]  In addition, registration helps simplify a trademark owner's infringement suit against unlicensed users, as a registered mark carries a presumption of validity (and five years after registration, the registration can become conclusive evidence of ownership of the registered mark).[4]  Moreover, registration provides an owner with more remedies under federal law than an unregistered mark affords.[5]

So while registration is not mandatory, there are strong incentives for a trademark owner to register his mark, particularly for those who plan to be in business for the longer term with a particular product or service.

In addition, a substantial portion of the value of businesses today comes from a business’ intellectual property, including the brand names used to distinguish its products and services in the market.  In fact, as we move further into an “information economy,” I would conservatively estimate that a majority of a business’ value comes from its intellectual property.  Identifying and protecting a brand name is a key step in the business planning process for any business.  Also, because of the widespread adoption and use of the internet globally, protecting one’s brand name from infringement is more important than it ever has been for business.

[1] See Francis, Collins, “Patent Law,” 5th Edition at page 983 and footnote a that provides further reading material for the history of trademarks.

[2] Marks referenced above are the property of their respective owners.  None of the mark owners are affiliated or suggested to endorse the statements of the author of this article.

[3] See 15 U.S.C. § 1072.

[4] See 15 U.S.C. §§ 1065, 1115(b).

[5] See, for example, 15 U.S.C. § 1111.

Cloud Computing Primer for Attorneys

The following is my presentation file from the annual Maryland State Bar Association meeting.  I was a panelist on the topic of Cloud Computing: Fact or Fiction on June 15, 2012.  My presentation discussed some of the basic issues about cloud computing, such as what it is, the cost savings that may be possible by moving to the cloud, some of the security issues with cloud computing, and some of the ethics issues practicing attorneys face when making decisions about computing systems.

If you have any questions about this presentation, please feel free to email or call me to discuss them.  Thanks.

Cloud Computing Fact or Fiction

Meaningful Use Overview for Maryland MGMA

On March 13, 2012, I presented to the membership of the Maryland MGMA on the topic of “Meaningful Use,” in light of the recent publication of the Stage 2 interim regulations by CMS.  Below, please find a link to the presentation file.

Meaningful Use Overview

Members had questions related to meaningful use, which I will make an effort to respond to under separate cover.

Don’t Be Fooled (Domain Name Registration)

One of my clients forwarded to me an email he received regarding the renewal of his domain name.  The email had the appearance of an invoice for the renewal.  The problem?  The invoice was not from my client’s domain name registrar, but from a vendor that wants my client to transfer his domain away from his existing registrar.

How Does This Work?

If you have a web site, your web site has a registered domain name.  That name (ending with .com, .net, or another ".something") has to be registered with an authorized domain name registrar, like Network Solutions or GoDaddy.  An international body, ICANN, is responsible for approving registrars for the top-level domains.  ICANN acts as a coordinator to make sure that a particular domain name is controlled by one responsible registrar, who is the host for translating the domain name into an IP address, which your computer needs in order to find each internet site you are trying to reach.  Without such coordination, the internet would likely stop functioning, in that you would be unable to consistently find a web site when you went to visit it.

Underneath the covers, each time you visit a web site, your computer asks what IP (internet protocol) address the domain name you've requested translates to.  For example, my domain name maps to a specific IP address.  My computer finds that IP address by asking a domain name server close to it (usually on the same local area network as my computer).  This local domain name server, in turn, checks whether it is an "authoritative" server for the domain name, and if not, asks a domain name server above it to locate the authoritative server and return the IP address for the name.  Most DNS servers are programmed with a list of "root hint" or upstream servers to ask when the local server does not know the answer.  Ultimately (and usually within a few seconds, which is kind of incredible, given that there are billions of computers on the worldwide internet), the local domain name server finds the address and returns it to my computer.  My computer, in turn, uses this information to point my web browser to where I was trying to go.
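For the technically curious, the recursive lookup described above is exactly what your operating system's resolver does whenever a program asks it to translate a name.  A minimal sketch in Python (the domain passed in is purely illustrative; any name works):

```python
import socket

def resolve(name: str) -> str:
    """Ask the operating system's DNS resolver for the IPv4 address of a name.

    Under the hood, the OS forwards the question to the local DNS server,
    which walks the chain of upstream/authoritative servers described above.
    """
    return socket.gethostbyname(name)

# "localhost" is answered locally without touching the network.
print(resolve("localhost"))  # 127.0.0.1
```

Tools like `nslookup` or `dig` expose the same machinery from the command line, including which server answered and whether the answer was authoritative.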

This architecture only works if there is a single authoritative source on the internet for each domain name.  If there were many competing authoritative sources, each might report a different IP address for the same name, which would mean my question of where to go might be answered differently each time I asked it.  Talk about mass confusion.  So, if you own a domain, you registered it with a registrar.  You pay a fee to maintain that registration, usually annually.

The Problem

The problem is that for many business owners, the registration was handled by a web developer, or was done years ago (because you can purchase a domain name registration for several years at a time).  It is easy, then, to forget whom you registered with when it comes time to renew your domain name.  And then it is even easier to be fooled into sending your credit card information to "Domain Services" (the originator of the spam that spurred this posting).  One way to avoid this is to set up your domain names to renew automatically with your current registrar.  You can also determine who your current registrar is by performing a "WhoIs" query on your domain name, which will also tell you when your domain name is due to renew.
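A WhoIs query is itself a very simple protocol (RFC 3912): open a TCP connection to a WHOIS server on port 43, send the domain name followed by a carriage return and line feed, and read back plain text that includes the registrar and expiration date.  A sketch in Python; the server named below is the real Verisign registry server for .com domains, but most web-based WhoIs tools do the same thing for you:

```python
import socket

def build_query(domain: str) -> bytes:
    """Format a WHOIS request line: the domain name terminated by CRLF (RFC 3912)."""
    return domain.encode("ascii") + b"\r\n"

def whois(domain: str, server: str = "whois.verisign-grs.com") -> str:
    """Send a WHOIS query over TCP port 43 and return the raw text reply."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(build_query(domain))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:        # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")
```

Look for the "Registrar" and "Expiration Date" lines in the response; if the registrar named there is not the vendor sending you an "invoice," you are looking at a transfer solicitation, not a renewal notice.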

Be careful – the internet is a wild place.  This is but one way to get into trouble!

The Struggle Over Privacy Online

More and more data is being collected and stored in more and more data centers all over the world as the use and functionality of the internet expands.  Sites like Facebook now have in excess of 800 million users, half of whom are active on any particular day.  An almost countless amount of information and data is shared with the public internet on a daily and hourly basis.  In addition, many businesses are using cloud-based services (like Google's Gmail or Google Apps, Amazon's marketplace, and a host of other solutions) to provide services and products to customers and to manage their businesses.  As a result, we keep inventing names for the units of measure used to calculate how much data is available throughout the world wide web (I mean, how many people do you know who use the term "exabyte" in conversation, really?).  The question this poses is what in the world all of this data is really being used for.

Answering that question is not simple.  Much of what governs the protection, use, and backup of data on the internet is a matter of private agreement between the service provider and the person or business putting data online.  When's the last time you stopped and read one of those online "click-through" agreements?  I can't say most are much fun to review (with an exception for the Sharebuilder user agreement, which took smoke breaks periodically and made entertaining chatter in between paragraphs of heavy-duty legal writing).  Commonly, these agreements (for services designed for consumers) severely limit the site operator's liability, disclaim any and all warranties regarding the service, and offer few protections for your data or your privacy.  (See, for example, Second Life's Privacy Policy, which provides some limitations on data provided to the service, though your ability as a user to control access to your information is relatively limited compared to what Second Life may do with information about you.  Google's Privacy Policy is somewhat more limiting on what Google might do with your data, but you will notice that the policies vary somewhat depending on the specific product you are using.)

There are also governmental regulations that may govern your privacy.  Facebook recently entered into a consent order with the Federal Trade Commission over allegations of privacy invasions by Facebook.  Presumably, other nations or international bodies may have jurisdiction over some of the larger companies that operate on the internet.  And, just as other international intellectual property rights vary by country, privacy regulation is also likely to vary (with some nations, like Germany, having stronger data protections than others).  Ultimately, our privacy interests have in part taken a back seat to having "free" applications available to us all the time.  Google's original product, web search, has historically been free to use by anyone connected to the internet, but only because advertisers have been willing to pay for click-through advertising.  As Google continues to dominate the web search market, it has also benefited from the many advertisers that can cost-effectively run ads alongside the search engine's results.  These ads are effective because they usually attempt to match what a user is searching for with a product or service relevant to the keywords.

Facebook (and other social media technologies) have likewise informed our cultural disinterest in privacy, by providing a forum to post all sorts of mundane, outrageous, or controversial information and graphics, and to quickly disseminate that information to "friends" or the general public.  However, there has not yet emerged a "Facebook" for health data (though, perhaps, the rise of health information exchanges and online personal health records may result in such an application).  Lawyers and accountants don't (at least not intentionally) publish their clients' secrets online.  Our government has labeled many more documents as secret (and therefore not as easy to obtain) in the years following 9/11.  There remain islands of privacy in the sea of unfettered information access that is the internet.  If you value your privacy, you may need to pay more to preserve it, or be more discerning in the products and services you contract to purchase.


Unauthorized Practice of Law & LegalZoom

LegalZoom is a national provider of online legal forms that markets to the general public.  You may have seen an advertisement with the famous attorney Robert Shapiro (a founder of the company) telling you that LegalZoom can help you form a company or write a will at a relatively low flat rate.  LegalZoom is controversial.  At least it is controversial for some bar associations in the United States who allege that LegalZoom is engaging in the unauthorized practice of law.

The unauthorized practice of law occurs when a person who is not licensed in a state provides legal services there, or holds himself out as able to do so.  Each state in the U.S. regulates the lawyers who practice within it, and each state has therefore defined what constitutes the "practice of law."  A class action suit was brought by citizens of Missouri against LegalZoom on the grounds that the document preparation LegalZoom provided was a legal service, but LegalZoom itself is not an attorney admitted to practice in Missouri (here's a blog post with links to more about this case; there is also a stub on the ABA Journal).

There are at least two sides to this story.  On one side, lawyers trained in their state's laws are more likely to be competent in drafting a document that is legally sufficient in that state.  Furthermore, lawyers are susceptible to suit for malpractice, are usually pretty easy to find and serve, and generally carry insurance.  An out-of-state web system that is not staffed by lawyers admitted to practice in a particular state is therefore less likely to competently draft legally sufficient documents, and is also less susceptible to claims of malpractice (or breach of contract).  Preventing the unauthorized practice of law is therefore an important protection within a state, guarding citizens against untrained providers botching their legal issues and leaving them without recourse for their legal problem and without the means to sue the service provider.

The other side is that lawyers are expensive, and that unauthorized practice of law statutes serve to reduce the supply of available attorneys, thereby artificially increasing the cost of legal services.  And there are a lot of ordinary people in the world who cannot afford to pay an attorney $600 per hour to write a "simple will" or help them file the incorporation papers for their new business.  There is, therefore, an under-served marketplace of clients who need an attorney's help but can't obtain assistance from an attorney in their state.

LegalZoom recently obtained around $100 million in venture capital, and may one day have an initial public offering.  More than a few people are betting that LegalZoom can navigate around the unauthorized practice of law, and that there is a substantial market for the services it provides.  I have had at least one client recently tell me that they started a business using LegalZoom.  Would I have done a better job forming their LLC, just because I am a Maryland attorney?  I would probably say no.  But I think customers miss out on interacting with an attorney and establishing a relationship with one.  Down the road, a person who starts a new business may need legal help to review other issues, write contracts, add a new owner, or sell the business to another entity.  The business could end up being sued.  LegalZoom does not, and could not, provide litigation services, because that service would clearly be the unauthorized practice of law unless it referred you to a Maryland attorney to handle the case.

Besides, the State Department of Assessments and Taxation provides many of the forms required to be filed in order to form a particular entity in Maryland.  Providing blank legal forms and general instructions is not the unauthorized practice of law, and this information is sufficient for some to properly get a business registered.  Our practice at Faith At Law takes a middle ground between blank legal documents and services like LegalZoom, and having a client go to a full-service law firm.  We offer legal document preparation services online that include limited legal consultation (with yours truly) provided to Maryland businesses and individuals by a Maryland-licensed attorney.  No, you won’t likely see ads for Faith At Law on television in California, but Marylanders can obtain flat rate legal services for certain documents from us.  And there are other attorneys providing similar kinds of limited legal services now in a number of states in the U.S.  My hope is that we can help meet a market need while also not leaving clients with a shabby legal service.  I’ll let you know when I’m ready for my IPO!

Health Information Exchange & Sharing Your Health Data

The ARRA (American Recovery and Reinvestment Act) provides incentives for qualifying health care providers that implement health IT systems in the next few years.  Among the requirements for receiving the incentive is that the provider demonstrate that the health IT system "is connected in a manner that provides, in accordance with law and standards applicable to the exchange of information, for the electronic exchange of health information to improve the quality of health care."  There is substantial incentive, therefore, for health providers to implement systems that can effectively exchange data with other systems through a Health Information Exchange (HIE).

ARRA is not the first statute to push the exchange of information in the health care market.  In fact, HIPAA, when it was originally enacted in 1996, provided authority for the Secretary of Health and Human Services to establish data exchange standards for claims and eligibility data exchanged with health insurers.  These standard formats, as defined by ANSI, pushed the health industry into an era of electronic data exchange with most health insurers.  Of course, what's on a claim form to the insurance company is not the same as the kind and extent of data available in a health IT system like an electronic health record (EHR).  The clinical data sent to insurers – the patient's diagnosis – is shorthand in comparison to the significant amount of clinical information collected on a patient, like lab results, patient histories, or reports from specialists.  And consistent storage of this information in EHRs is in shorter supply than diagnosis data in their practice management system cousins.  Even the patient medication list, which is typically stored as structured data in most health records, may not be stored in a consistent format across EHRs.

HIE systems today have a substantial uphill battle ahead of them to be able to collect and meaningfully display data across a variety of information systems, so that consumers of this data will be able to use it in a meaningful way.  There is substantial pressure on the health market, however, to improve efficiency.  Today the U.S. health market struggles with effectively managing the care of patients, partly because of the amount of data available on patients, and the amount that is redundant but inconsistent.  For patients with significant health problems, a visit to a variety of medical professionals results in a fair number of disparate documents about the patient with a variety of sometimes conflicting information about the patient.  A patient taking 5 or 6 different prescriptions may forget one when asked by one specialist; different physicians may end up ordering redundant tests for the same patient; patients seeking narcotics may be able to play physicians off of each other.  HIE systems present a possible solution to the problem of securely sharing information between health care providers that serve the same patient.

Therefore, as incentives and pressures are placed on the market to improve efficiencies, I would anticipate that some of the technical issues with exchanging health information will be resolved.  That leaves a number of other areas to be more completely addressed, including patient privacy, the quality of data and the ability to trust the source of the data, and backup and redundancy.

The Privacy Problem

One of the great challenges for the HIE movement is maintaining patient privacy.  HIPAA was originally implemented in part to specifically address patient privacy, as have a number of other state laws on this topic (for example, the Maryland Medical Records Act, See Md. Health-Gen. Code Ann. § 4-301 et seq.).  And other states are getting in on the action to protect consumer privacy, including Massachusetts, Minnesota, and Nevada, just to name a few.

However, laws alone may not be enough to effectively regulate and protect the availability of health data.  Under the present HIPAA enforcement regulations (which were modified by ARRA this year), the top fines for negligent (rather than intentional) violations of the security regulations are relatively low compared to the potential size of an HIE (imagine, for example, if a company like Google or Microsoft were to become a dominant HIE), because the fines are a flat rate per incident rather than being scaled according to the company's gross revenue or the severity of the breach.  The ARRA did move in the right direction this year by implementing a four-tiered approach to violations in place of the original enforcement authority under HIPAA, but further scaling may be required for fines to become an effective deterrent to lax security practices.

Furthermore, having a patchwork of privacy laws increases the overall cost of compliance for HIEs, which increases the cost to implement these systems without necessarily improving the actual security of the information stored at the HIE.  This is caused by overlapping regulation, along with the expense of responding to multiple authorities with the right to audit or investigate the HIE (as larger HIEs will undoubtedly operate across state lines).  Sadly, I imagine that this problem will probably get worse before it gets better, given the number of relatively autonomous sovereign powers within our country (50 states plus the federal government) and the scope and scale of the privacy issue being considered.

I say that because of the amount of data that will likely become available within HIEs across the nation, which will eventually include the health data of all 300 million of us.  Assuming that the typical patient's chart runs between 5 and 10 megabytes (with images and other PDF attachments that are not as small as documents stored within a data table), the total data storage for all citizens would be between 1,500 and 3,000 terabytes – roughly the total storage capacity of 30,000 new MacBooks.  For comparison, in 2006, Google's estimated storage for its entire operation was about 850 terabytes, covering information on about 24 billion web pages.  It is a lot of data, and a lot to manage.  Given today's fractured regulations, there will be substantial governmental interest in further regulating this data in the next few years.  However, without more consistent regulations, patient privacy may not be effectively protected.
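The back-of-the-envelope math is simple enough to check; the 5-to-10-megabyte chart size is, of course, the assumption stated above:

```python
# Rough sizing of nationwide HIE storage under the assumptions in the text.
patients = 300_000_000           # approximate U.S. population
mb_low, mb_high = 5, 10          # assumed size of one patient chart, in MB

# Using decimal units (1 TB = 1,000,000 MB), as storage vendors do.
tb_low = patients * mb_low / 1_000_000
tb_high = patients * mb_high / 1_000_000

print(f"{tb_low:,.0f} to {tb_high:,.0f} terabytes")  # 1,500 to 3,000 terabytes
```

At 3,000 TB, a fleet of laptops with roughly 100 GB drives each (a typical MacBook of the era) would indeed number about 30,000.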

Changing Attitudes Towards Privacy

Our future privacy policies may also be shaped by our youth's attitude toward privacy today.  Social networking sites, for example, expose a great deal of information about the youngest among us, but the predominant users of these systems don't seem to mind very much.  Now, of course, Facebook is not designed for users to post their most recent blood sugar levels, so who knows whether college kids would treat that information the same way they treat pictures snapped of them by the campus paparazzi at the fraternity Friday night bash.  But it stands to reason that the next generation's attitudes toward privacy will be substantially different from those of the generation presently called to govern the nation.

The result may be a reduced emphasis on privacy paired with increased criminal penalties for those who engage in theft of information.  For example, perhaps instead of worrying about whether health data is squirreled away in an underground bunker with Dick Cheney, the future leaders of the nation will make this data generally available via the internet, ultimately reducing its value to would-be thieves.  For myself, I can't say it matters much if others know that I have high cholesterol and a family history of diabetes, but then I also don't think there is much stigma attached to either of these conditions, as there might once have been (or might still be for other health issues).

Data Quality and Trusted Sources

HIEs will also need to address head-on the quality and reliability of the data stored in their databases.  Today, data exchange arrangements generally go no further than the initial setup of some kind of private network and agreement on the file formats acceptable for exchange.  Inherently, one system trusts the data it receives from the other and merely re-publishes it into its own database, identifying the source of the data.  Usernames and passwords alone may not be enough for everyone to know that the data being sent or received is accurate and reliable.

In addition, HIPAA (and certain other laws) has placed only modest emphasis on technical encryption, and the result is that little has been done with these technologies in most systems today to ensure that data entered cannot later be repudiated by the person who purportedly entered it.  For example, many commercially available database systems are not natively encrypted.  Local area network traffic on the wire is rarely encrypted, as database systems rely on border security devices to keep outsiders off the LAN.  Passwords are not consistently complex across an enterprise (especially where multiple database systems maintain their own passwords and accounts), and certainly cannot reasonably be changed frequently enough to ensure a password has not been compromised (without the user community revolting against the IT staff).  And users routinely share passwords in spite of numerous repeated messages from system administrators not to do so.

Furthermore, data exchanged between systems relies on the initial configuration of the networking that connects them remaining uncompromised.  In the typical data exchange design, there is no further verification to ensure that the messages actually received are correct.  TCP itself was designed with a checksum in each packet, but that only tells the receiver whether the packet received matches what the source device sent, not whether the data comes from the human or system source alleged (e.g., the laboratory technician or physician who actually created the entry in the first place).
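The gap between transport integrity and origin verification can be illustrated with a keyed message authentication code. This is only a sketch – the key and the lab-result string are hypothetical, and a MAC proves origin only between parties sharing the key; true non-repudiation would require asymmetric digital signatures – but it shows the extra layer a checksum alone cannot provide:

```python
import hashlib
import hmac

# Hypothetical shared secret for a sending lab system.  A real deployment
# would use per-system key management or public-key signatures.
LAB_SYSTEM_KEY = b"example-secret-key"

def sign(message: bytes, key: bytes) -> str:
    """Attach a keyed MAC so the receiver can check *who* sent the message,
    not merely that the bytes survived transport (all a TCP checksum shows)."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes) -> bool:
    # compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(sign(message, key), tag)

entry = b"Glucose result: 98 mg/dL, entered by tech #4412"
tag = sign(entry, LAB_SYSTEM_KEY)

assert verify(entry, tag, LAB_SYSTEM_KEY)             # authentic message
assert not verify(entry + b"0", tag, LAB_SYSTEM_KEY)  # altered in transit
```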

I anticipate that authentication will move towards far more sophisticated, multi-level approaches (even though the biometric movement seems to have lost steam, at least in the general consumer market).  For example, instead of or in addition to a username and password, systems may implement a token or other physical card to grant access (such systems exist and are in general use today).  Other security measures may involve thumbprints or other biometrics.  I would also imagine that more sophisticated encryption algorithms could be used beyond a 128-bit cipher, and that encryption might occur at a more basic level than it does today (if transmissions are encrypted at all).  For example, databases themselves may be encrypted at a record or table level, or application access could be managed through an encrypted socket instead of the plain text many use now.
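The token idea mentioned above has since been standardized as one-time passwords (HOTP, RFC 4226, and its time-based variant TOTP, RFC 6238): the token and the server derive the same short code from a shared secret and a moving counter. A minimal counter-based sketch using only the standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based one-time password per RFC 4226.  A hardware token and
    the authentication server each compute this from a shared secret; a
    time-based token (TOTP) simply uses the current 30-second window as
    the counter."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Published RFC 4226 test vectors for the ASCII secret below
secret = b"12345678901234567890"
print(hotp(secret, 0))  # 755224
print(hotp(secret, 1))  # 287082
```

Because the code changes with every use (or every time window), a shared or shoulder-surfed password alone is no longer enough to gain access.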

Beyond user access controls for entering data, surely there could be some additional layer of verification once data has been received from a producer system – data that could, by design, be independently verified before being committed to the receiving system.  The alteration (or simply erroneous entry) of data in transport from one system to another creates the real possibility of a bad health care decision by professionals using that data.  This is certainly one of the major weaknesses of consumer-level HIEs such as those from Google or Microsoft, which must rely on the consumer to enter their own lab and pharmaceutical information into the database when that data is not available electronically, or on data providers whose administrative or clerical staff do the data entry without further review before distribution.

HIE Backup and Disaster Recovery

Today, a number of technologies allow for data backup and redundancy to ensure that systems can be highly available and resistant to significant environmental or system disasters.  One category of technology that addresses redundancy is cloud computing, a modern equivalent of what application service providers (ASPs) of the 1990s were offering, or what the ancient mainframes of yesteryear offered computing users back in the bad old days of the 1970s.  What is fundamentally different today, however, is the possibility of massively redundant and distributed information systems belonging to a cloud, whereas both ASPs and mainframe computing were often centralized in one server room, or a series of server rooms, in a single facility.

A common example of computing in the cloud today is Gmail, the email service provided by Google free to consumers.  There are still, somewhere, servers connected to the internet and controlled by Google that respond to SMTP requests, but Google most likely has these servers distributed all over the planet and connected to a larger, redundant network infrastructure.  Data stored on these servers is likely replicated in real time so that all Gmail replication partners are up to date, regardless of which one you actually connect to when you use your web browser to navigate to your email account.  Gmail has been around for some time now, and there are a fair number of users (26 million according to one article as of last September; Wikipedia claims 146 million monthly Gmail users as of July 2009).

However, even Gmail has outages, in spite of the sophistication of its backup and redundancy.  These outages are inconvenient to email users, but could be fatal if the data were being relied upon in an emergency room.  And local EHRs undoubtedly fail more often than much larger, hosted solutions.  Perhaps the incentives in the market for HIEs and EHRs will push us into a new age of reliability in IT, based on cloud computing ‘2.0’.

Future is Fuzzy

While it is not clear what may happen as more data becomes available, I can say that the amount of money on the table under the ARRA, in state budgets, and privately in the hands of organizations like Microsoft and Google is pushing health information exchanges to the forefront of health IT initiatives.  Making more information available and shared – information that is accurate and adequately protected – is very likely to improve health outcomes and increase the efficiency of health care delivery.  My hope is that we can solve some of the more nagging technical and privacy concerns in the short term.

Implementing Your Electronic Health Record System

Health IT has been put back into the forefront of the Obama national health care initiative, in part because of financial incentives built into the ARRA for health care providers that implement and meaningfully use a health technology system in the next few years. The cost savings are premised in part on the success of the installation and implementation of the information system to be used by health care providers. This article will focus on some of the details of implementing an electronic health records system, along with some of the pitfalls that can keep a project from being completed successfully.

The End Goal is Meaningful Use
In order to receive reimbursement from the Medicare or Medicaid program, the ARRA requires that a provider demonstrate meaningful use of the system, connection to a health data exchange, and submission of data of clinical quality measures for patients at the practice. (See my blog for more details here) Reaching these goals goes beyond the mere technical installation of some computer system; meaningful use in particular will likely require health care providers to show that they actually use the computer system in managing patient care, reducing errors, and improving health outcomes for individual patients.

Getting there requires effective planning for the project and a productive implementation process.

The good news for providers who want to implement an electronic health record is that: (a) the data you need to see patients effectively will be available when you need it (no more lost-chart syndrome), (b) the chart documentation will support the diagnosis and E&M codes billed to the insurer, (c) electronic health records can be tightly integrated with a practice management system to reduce data entry errors and improve billing, (d) most electronic health records make clinical or mandated reporting easier than paper charts, (e) lab results can be electronically imported into the electronic health record from major lab providers, (f) improved E&M coding can lead to better reimbursement, and (g) an electronic health record investment can be viewed by your staff as an investment in them, leading to higher staff retention and satisfaction. But there is a cost to achieving these benefits.

For one, some of the office workflows for handling patient care may need to be modified or adjusted to incorporate the electronic health record. Some workflows that operate on paper in an office will not convert efficiently to a computer system. Forms used to process or document patient care may also need to be modified when they are converted into the electronic health record.  Electronic health record installations for health care providers tend to expose workflow problems and breakdowns that require attention in implementation for the project to be successful.

Secondly, all the staff in the office will need to be computer literate, and generally, physicians and other health care providers will need to be able to use a computer effectively while examining their patients. This has become less of an issue as more doctors and other providers are trained to use a variety of computer systems at medical school, but computer literacy is still a major issue for some practices in the nation.

Third, electronic health record projects are high risk – there is a substantial chance that the project will be derailed for any number of reasons, including a lack of a process for effectively making key decisions, office politics, the capital expense to acquire computer hardware and software, and a lack of technical expertise among the implementation team, among other challenges. These can be overcome or at least mitigated by sufficient advanced planning by the organization.

And finally, most studies of electronic health record installations suggest that your practice will be in the minority of practices using an electronic health record (though there has been an improvement in the market penetration here over the last few years). This is partly because of the expense of implementing the systems, and the longer-term costs of maintaining them.

You can get there if you have a good plan.

Manage Expectations Early and Often
No, an electronic health record will not solve your workflow problems without your help. An electronic health record is not free, even if licensed under an open source software license. The data that is collected in the electronic health record is useful, but will require further technical assistance to be useful for research or analysis. Staff can’t keep doing things the same way and expect a different outcome (besides this being one definition of insanity, electronic health records are not magical beasts with wings, and magical thinking does not lead to a happy end user). Doctors won’t be able to see 50 patients per day after install if they were only able to manage 20 per day before. A project that lacks goals that are attainable will fail.

Any system project can be a victim of unreasonable or unrealistic expectations. Those leading the project need to be frank about what can be achieved and at what cost to the staff using the electronic health record. Expectations can be managed by establishing tangible goals and having a workable project plan with real milestones and a clear assessment of the resources (financial and staff time) that will be needed to reach each one. For example, implementing the electronic health record two months after purchasing it can be realistic, but only if the provider’s office is prepared to commit significant time to the planning and installation, particularly in identifying forms that need to be developed electronically and lab interfaces that need to be installed (two of the most time-expensive portions of an electronic health record implementation). The need for effective training also cannot be overstated – staff should not expect that they can pick up the system in an hour or two, or learn as they go with live patients in the room.

Picking an Information System
Finding the right electronic health record is an important task and should not be left to chance. There are a lot of electronic health record vendors in the marketplace today with a variety of installations, histories, and track records. Developing a written request for proposal (RFP) and requiring an objective process for evaluating responses to it is essential to fairly evaluate the vendors in the marketplace. Sending the RFP out to 100 vendors is not helpful, nor is having a 100-page requirements section. But your prospective partner for this project should be able to effectively respond to your RFP and explain in satisfactory detail the options and costs for implementing the proposed system.

Furthermore, your organization should form a search committee comprised of enough staff to provide meaningful input on the responses to the RFP, and to interview qualified vendors to assess the needs of the essential practice areas. Vendors should also be able to competently demonstrate their product to the committee’s satisfaction, so that the committee can identify the best two candidates for the job.

To help encourage staff buy-in (where your facility is sufficiently large that the search committee may not represent all interests), I have also recommended that the finalists demonstrate their product to all staff, and to put the final decision to a group vote. This doesn’t work in all organizations, but the more effort you put into including the staff that use the system in the process, the more buy-in to the project you will garner, which increases the odds of a successful implementation.

Vendor Negotiations
Once you have identified the best candidate electronic health record, your organization should begin to examine the terms of the contract with the electronic health record vendor. Most vendors have a standard form contract that describes the terms of the relationship, particularly for ongoing support and updates to the product. These contracts are complicated and an attorney can be helpful to ensure that the contract fairly represents the relationship, costs, and promises made by the vendor along the way.

Negotiations can take some time to complete, particularly where multiple parties are involved or there are substantial costs involved. Hammering out contract details with the vendor is an important step in the planning process.

Major Milestones
Once a vendor has been chosen, most electronic health record implementation project plans will have the following major milestones on the way to a successful go-live: (a) form a planning committee, (b) form a technical team, (c) review and make decisions on the requirements for the project, (d) install the server, software, and workstation software, (e) develop all required clinical content (such as electronic forms, flowsheets, and data requirements) for go-live, (f) implement all interfaces for data flowing in and out of the electronic health record, (g) convert all charts from paper into the electronic health record, (h) complete staff training, and (i) go live with the system.

The planning committee should include the clinical departments that will be using the system, and should be designed to regularly meet up to and through the go live date. The committee should be charged with enough authority to make decisions about the project’s implementation, and should become your initial group of super-users or staff with more training about the electronic health record. Your super users should then become sources of information for the rest of the staff as they work through integrating the electronic health record into their practice.

The technical team is comprised of the IT staff that are responsible for installing the server and workstation equipment, getting the electronic health record software and database installed properly, configuring interfaces between systems, and installing any supporting network or peripheral technology. This team should regularly report to the planning committee or the project manager for the installation.

The planning committee is responsible for making the decisions about how the electronic health record will be implemented. The vendor supplying the system should regularly participate in the committee’s meetings, and generally the project manager should chair the committee. Actions and decisions of this committee should be documented and distributed to the members. In my experience, the meetings of the committee are geared toward training the members on the details of the electronic health record so that they can determine how the system should work for their departments. These meetings can be contentious because a number of people will need to agree, but in the longer term, this process helps ensure that the project is implemented appropriately.

This committee also should be responsible for identifying project priorities. The reality is that no electronic health record implementation can go live with every request ready – there are always too many requests and not enough time to implement all of them. This committee should be prepared to identify what’s most critical and clarify these priorities to the staff involved in the installation.

In addition, this committee should be committed to being thorough and to addressing concerns about specific implementation decisions and priorities along the way. Some decisions made early on can be very time-consuming and costly to correct later.

The clinical content of the application includes the electronic forms that will be used to document care, the organization of the sections of the electronic health record that display structured data (such as lab results for a patient), and other functional areas of the electronic health record that are open to modification at implementation. This development may be handled by the vendor. However, after go-live the provider may need to maintain the content developed during implementation, or be in a position to add new content. In some cases, third parties may sell pre-made clinical content separately from the electronic health record vendor. All of this customization of the product requires special attention to ensure that the content meets user requirements and is developed according to accepted standards of practice.

Most electronic health records support some interfacing with other products, using a common standard like HL7. If interfaces with other software or third parties are essential to the implementation, substantial lead time and attention to detail are required for those interfaces to be ready by the project’s go-live date.
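To give a flavor of what these interfaces carry: HL7 v2 messages are pipe-delimited segments, one per line. The fragment below is hypothetical and heavily abbreviated (real interfaces must handle escaping, repeating fields, and many more segment types), but the sketch shows why interface work takes lead time – every field position must be mapped correctly between systems:

```python
# Minimal parse of a hypothetical, abbreviated HL7 v2 lab-result (ORU) message.
# Segments are separated by carriage returns; fields within a segment by "|".
raw = (
    "MSH|^~\\&|LAB|ACME|EHR|CLINIC|200907221530||ORU^R01|MSG001|P|2.3\r"
    "PID|1||12345^^^MRN||DOE^JANE\r"
    "OBX|1|NM|GLU^Glucose||98|mg/dL|70-110|N\r"
)

# Group fields by segment type (a segment type can repeat, hence lists)
segments: dict[str, list[list[str]]] = {}
for seg in raw.strip("\r").split("\r"):
    fields = seg.split("|")
    segments.setdefault(fields[0], []).append(fields)

name = segments["PID"][0][5].split("^")   # PID-5: patient name components
obx = segments["OBX"][0]                  # first observation segment
print(name[1], name[0])                   # JANE DOE
print(obx[3], obx[5], obx[6])             # GLU^Glucose 98 mg/dL
```

Mapping mistakes here (an off-by-one field, an unexpected units code) are exactly the kind of data-quality problem discussed earlier, which is why interface testing deserves its own slot in the project plan.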

Some meaningful portion of the existing paper charts will need to be converted to electronic format into the electronic health record, prior to go live if at all possible. This is a very time-intensive process, and is often used as a training opportunity for users, who can be scheduled to convert specific charts as part of learning how to use the electronic health record. However, most practices have many more charts than users available to convert them, and many project planners will budget additional resources to aid in the paper conversion process.

Some practices opt to extract specific data from a paper chart into electronic format, using specialized clinical content for this purpose. Other practices may simply scan and index the paper chart documents as is into an electronic document and attach it to the chart as the chart history. Still others will do a hybrid of these two solutions.

Training is also a very important aspect of any electronic health record implementation. From my experience, up to 20 hours of training may be required for super users of the electronic health record; the minimum is about 4 hours for sufficient exposure to the basics of an electronic health record. Depending on the total staff to be trained, scheduling training classes for an organization may be a substantial time commitment. Generally the electronic health record vendor can give guidelines on the minimums for training to gain proficiency on the system. Note that no implementation’s training will end at go live; generally post go-live training and ongoing training for new staff after the system is implemented are ongoing expenses of the electronic health record.

Greening IT Through Virtualization

Technology continues to evolve, providing people with new functionality, features, information, and entertainment.  According to Ray Kurzweil, a number of metrics for computer performance and capacity indicate that our technology is expanding at a linear or exponential rate.  Sadly, the physical manifestations of technology are also helping to destroy the planet and poison our clean water supplies.  According to the EPA, nearly 2% of municipal waste is computer trash.  While recycling rates have improved in recent years, only 18% of computers, televisions, and related solid waste is actually recycled by consumers, sending millions of tons of unwanted electronics into landfills each year.  Businesses contribute to this problem as well, as they are major consumers of computers, printers, cell phones, and other electronics.

Computers placed into a landfill pose a significant environmental threat to people and wildlife.  Electronics can contain a number of hazardous materials, such as lead, mercury, cadmium, chromium, and some types of flame retardants, which, in the quantities of equipment being disposed of, pose a real threat to our drinking water.  See the article here with the details. Lead alone in sufficient quantities can damage the central nervous system and kidneys, and the body retains heavy metals, accumulating more of the substance over time until a threshold is reached beyond which symptoms may be fatal.  See Lead Poisoning Article. Mercury, cadmium, and chromium aren’t any nicer to people or animals.

Everyone should recycle their electronics through a respectable electronics recycler (See Turtle Wings website for example).  However, you can also reduce your server fleet and extend the life of your computer equipment through virtualization.  (See an earlier post on virtualization on my blog).  Virtualization of your server equipment means that you will use fewer physical servers in order to present more virtual machines to your user community for accessing print, authentication, file sharing, applications, web, and other computer services on your network.  Fewer servers in use means that you will have fewer physical server devices to purchase over time and fewer servers to recycle at the end of their life.  Virtualizing your desktops can help by extending the useful life of your desktops (they are just accessing a centrally stored virtual desktop, on which all the processing and storage occurs, so a desktop with little RAM and CPU will work for longer), and also reducing the amount of electricity that your organization uses per computer (if you then switch to a thin client such as a Wyse terminal or HP computing device).

Virtualization can also improve your preparedness for disasters, whether by flood, virus, or terrorist.  For one thing, backing up the data file that represents your virtual servers is easier, can be done during normal business hours, and can be far more easily replicated to another site than the contents of a physical server.  Furthermore, virtualization can reduce the entry costs to implement a disaster recovery site because you can use less overall equipment in order to replicate data from your production environment, so your ongoing operating costs are reduced as compared to a physical server configuration.  Testing upgrades is easier because you can duplicate a production virtual server and test the upgrade before rolling it out to the live system (which costs less than buying another physical server and running a copy of the system on it to run the testing).  Virtualizing desktops also simplifies some of the support and administrative tasks associated with keeping desktops running properly (or fixing them when they stop working right).

So, before you buy another physical desktop or server, think about whether virtualization can help save Earth and you.