Lessons From IT Management: Introduction

For the last ten years, I have worked for a health center that serves several underserved populations: the gay and lesbian community, HIV-positive patients, and patients who lack sufficient access to health care.  Over that time, we have built a complex and extensive information system to support the mission of the organization.

This series is about how technology can be integrated into the delivery of health care, and the problems that come up along the way in getting the technology to work.  I suspect that technology causes suffering for some in spite of our best efforts to the contrary.  But our purpose in implementing technology is to reduce suffering by passing repetitive tasks to the computer while increasing the amount of time available for people to do what they are good at (like doctoring, lawyering, and so on).  Within health care, automation can also reduce patient suffering by reducing errors (for example, by ensuring accurate prescriptions, or by reducing the number of times the same data must be entered into systems that support patient care), which should improve the quality of care that patients receive from their physicians.  When used properly, technology should also bring relevant knowledge to the user as they are doing their job (by making negative drug interactions known to a prescriber, for example).

But technology can cause trouble for users who were perfectly happy with their paper documents.  The transition from paper to an electronic system can be tricky; moving from one computer system to a newer one can also pose real challenges.  This series is meant to help technologists and users out in the world avoid some of the common pitfalls of technology as both charge ahead full steam into implementing health IT to take advantage of the incentives in the ARRA.

This series is also about the place where the rubber of our lofty humanitarian and economic goals meets the road of personality disorders, unreasonable expectations, and inefficiency – which is to say the path to getting a computer system working for the people who will ultimately use it.  For the technologist, I do not think you can avoid the road (there are not yet helicopters in the arena of health IT implementation – though one day there may be), but you may at least find some solace in the fact that you are not the only one to have traveled this path.  For end users who happen to read this book, you might recognize a peer or yourself in these pages and gain some insight into why your IT staff always seem so grumpy.

While others have contributed to the subject matter, any mistakes in this series remain solely those of the author.   Please feel free to contribute by making comments on the blog.  And good luck to those of you implementing technology.

Lessons From IT Management, Chapter 1: Information Insecurity

To improve the security of our network, we decided to close port 3389 and no longer publish a Windows terminal server to the internet.  To continue to support remote access to our network, we implemented a secure socket layer (SSL) virtual private network (VPN) device that allows users outside of the corporate network to create a secure tunnel into the network.  As implemented, end users were required to use a particular operating system and to follow relatively simple instructions to install a small program that would initiate the tunnel from the home user’s workstation to the corporate network.  Authentication relied on existing Active Directory accounts so that users did not need another login.  The appliance also gave us control over which accounts could have remote access, so we could prevent known trouble accounts, such as guest and administrator, from having access to the protected network.
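
Out of curiosity (and not as a substitute for a proper security assessment), a quick check like the sketch below can confirm from outside the firewall that TCP 3389 no longer answers while the SSL VPN still listens on its usual port.  The hostname is a placeholder, and the assumption that the VPN appliance listens on 443 is mine, not a detail of our deployment.

```python
# A minimal sketch for confirming that a host no longer answers on TCP 3389
# from outside the firewall.  The hostname is a placeholder, not a real system.
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host = "remote.example-clinic.org"   # hypothetical external hostname
    for port in (3389, 443):             # RDP should be closed; the SSL VPN is assumed to listen on 443
        state = "open" if port_is_open(host, port) else "closed/filtered"
        print(f"{host}:{port} is {state}")
```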

By corporate policy, remote access was originally designed for clinicians to access patient medical records while on call.  Over time, end users have come to use remote access to work in the comfort of their homes, whether on call or not.  However, in no case has the corporation required that end users be able to work remotely as a matter of course, except for a few traveling staff who work during the day at a third-party facility.

Nonetheless, users had gotten it into their heads that working from home was a right, not a privilege.  And with that right, in their view, flows an obligation on the part of IT to support the home user’s network configuration.  The change IT made to the method of remote access was therefore unwelcome to some and met with resistance, even though the new method was more secure, was recommended by our outside information security consultant, and addressed a major security vulnerability in our network.

There were really two important lessons from this experience.  First, proactive communication and involvement of the user community are important elements of any plan to change remote access.  I suspect some would still have grumbled at the change, but we might have headed off some of the complaints simply by better explaining why the change was being made.  Second, remote access had grown organically over time, so a lot of people were using it on a wide variety of home computers and home networks.  Many of the staff were not particularly adept at configuring their home firewalls, routers, or other network devices when changes were needed to reach the corporate network.  We also underestimated how diverse those home setups were and how much configuration could be required for the SSL VPN client to connect to our network and establish the tunnel for secure communications.

We also discovered that the device was not particularly compatible with OS X (there was a guest kiosk function that would work on OS X, but the screen resolution and performance were poor and effectively unusable for most staff who had to be connected for longer periods of time).  We had not realized how many staff were actually using Macs at home, so this also caught us off guard.  Of course, Parallels and VMware both offer virtualized Windows XP desktops (with which the appliance was compatible), but users still complained that they had to use one of these in order to access the network.

Inherently, there is tension between user access and security, and it is up to IT management to determine how much pain to inflict upon the users to protect network assets.  Not everyone will be happy with the balance.  In this case, I still think we made the right call, but we didn’t implement according to a complete plan.  Next time will no doubt be better.

Reducing Health Care Inefficiencies

Can Health IT save us from ourselves?  See the Yahoo article on another assessment of health care spending in the U.S. and how much money is wasted in service delivery costs.  According to the article, about half of U.S. health care spending is wasted through inefficient use of resources.  Now, if you read the article, you will note that the top 8 items on their list only add up to about $600 billion, while the claim is that $1.2 trillion is lost (and you would think that a bunch of accountants could add, so maybe the journalists misread the fine print of the analysis).  But even if you accept the $600 billion as the cost of inefficiency, that is more than the entire Medicare program costs in the U.S.

One of the big items on the list is inefficiency among insurers who “magically deny” claims or otherwise require far too much in order for a provider to get paid appropriately.  I find it interesting that this remains on the list of problems.  HIPAA was passed by Congress in 1996.  Part of HIPAA mandated that, through regulation, standards be developed for the electronic transfer of information between insurers and providers of health care, including claims.  The regulations eventually required that all or substantially all providers be able to submit claims electronically, which, one would expect, would be more efficient than the manual processing of paper claim forms.

So, if the auditors suggest that we still are wasting $200 billion per year on inefficient data exchanges with insurers, perhaps this deserves more focus.

Getting paid by insurers happens at the end of the process of delivering services to patients.  At the beginning, a patient presents to the doctor’s office with a problem and sees a medical assistant or nurse for preliminary measurements (like blood pressure and weight).  The patient then sees the physician, CRNP, or physician assistant, who may refer the patient to another provider, write a prescription, make other recommendations, or order lab work to rule out certain causes.  At the conclusion of the visit, the physician will document the encounter, record a diagnosis, and generate a financial transaction that must be processed and submitted to an insurer for payment.

The patient then will see other providers, the lab, the pharmacy, and perhaps come back for a follow-up visit with the physician.  All of the steps in this process involve data transfer between several information systems, often housed in several different facilities, with different standards and different purposes.  A key for a physician to get paid at all is having accurate insurance information about the patient.  Surprisingly, patients are not necessarily the best source of this information.  Insurers, however, are apparently no better at it on average; otherwise, it would follow that we would already have regional databases or a national database of eligibility data available to all providers.  I assert this because standards for eligibility data have been around for a fair amount of time in the form of the ANSI X12 standard, yet a fair amount of money is still lost in the claims processing area of health care.
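
For readers who have never seen one, the sketch below shows roughly what these standardized eligibility transactions look like on the wire: an X12 interchange is just delimited segments and elements.  It is a simplification with invented content; a real parser must honor the delimiters declared in the ISA header and the full 270/271 transaction set structure.

```python
# A simplified sketch of reading an ANSI X12 transaction (such as a 270
# eligibility inquiry).  The "*" element and "~" segment separators are
# assumed here, and the sample content is invented for illustration only.
SAMPLE = "ST*270*0001~BHT*0022*13~SE*3*0001~"

def segments(x12: str, seg_term: str = "~", elem_sep: str = "*"):
    """Yield (segment_id, elements) tuples from a raw X12 string."""
    for raw in filter(None, (s.strip() for s in x12.split(seg_term))):
        parts = raw.split(elem_sep)
        yield parts[0], parts[1:]

for seg_id, elems in segments(SAMPLE):
    print(seg_id, elems)
```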

Perhaps this is so because providers want to get paid but insurers don’t have a good reason to pay them promptly.  Insurers benefit from holding onto capital and accruing interest on it: the longer an insurer can do this, the more interest on the investment it collects, which goes straight to the bottom line.  ARRA’s incentive system requires that physicians meaningfully use health IT and participate in some form of health information exchange.  But there is no comparable set of incentives for insurers to participate in HIEs, or to incentivize providers.

For example, insurers could prefer providers with health IT in place over those without it.  Another example would be for insurers to pay incentives to providers for better clinical outcomes (only possible if the providers can produce useful and independently verifiable data, such as lab information, which is really only practical through the use of an HIE).  The market may figure this out on its own, but I honestly doubt it.  Perhaps the feds will pick up on this market failure and intervene to start improving efficiencies in this area, either in health reform now or in an ARRA part II in the next several years.

Understanding Health Information Exchange

I recently attended a break out session at the Centricity Healthcare User Group (CHUG) Fall meeting in Washington D.C. on health information exchange (HIE) in Oregon.  HIE is an information system that allows individual health care providers to exchange data with trusted partners about patients shared among the partners.  For example, clinical data about a patient in Oregon that goes to a local hospital and a specialty physician (like a cardiologist) can be shared via an HIE.

Health IT compatibility with an HIE is also an American Recovery and Reinvestment Act (ARRA) requirement for providers to receive incentive payments from Medicare or Medicaid under the Act.  That means that a physician practice that buys an information system will need to contemplate how that system could interact with an HIE in order to be eligible for the $44,000 to $64,000 in investment money available through the ARRA starting in 2011.

HIE is an obvious step forward with health technology.  Health IT started out as a localized solution to manual or paper processes in the offices of individual providers back in the bad old days of computing.  That meant that the data collected was stored locally and was unavailable to other systems or individuals who might find it useful.  For example, if you go to the doctor’s office, the office “registers” you to their system (paper or electronic), which means that you provide specific demographic information to the registrar.  In the bad old days, you’d do this repeatedly every time you went to another physician or ancillary service (like the lab, radiology, etc.).

HIE presents a way for trading partners to share this data, which helps reduce the time patients spend filling out forms and the administrative time each medical facility spends keeping track of your address and insurance information.  It should also increase the overall accuracy of the data stored across systems.

HIE is also aimed at clinical information.  If you have a primary care physician and a specialist, both ought to know what medications you are taking, regardless of who the prescriber was.  HIE provides a way to share this information automatically.  The folks in Oregon have also implemented their HIE to support sharing problems and allergies, and it sounds like they are planning to implement methods to share other kinds of data in the future.
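
To make the medication-sharing idea concrete, here is a toy sketch of the kind of merged view an HIE could present when two providers each contribute their own medication list.  The drug names, prescribers, and field names are invented for illustration and do not reflect Oregon’s actual implementation.

```python
# A toy sketch of an HIE-style merge of medication lists: two providers each
# report what they have prescribed, and the exchange presents one de-duplicated
# view keyed on the drug.  All data here is invented for illustration.
from collections import defaultdict

primary_care = [
    {"drug": "lisinopril 10 mg", "prescriber": "Dr. Smith (PCP)"},
    {"drug": "metformin 500 mg", "prescriber": "Dr. Smith (PCP)"},
]
cardiology = [
    {"drug": "lisinopril 10 mg", "prescriber": "Dr. Jones (Cardiology)"},
    {"drug": "atorvastatin 20 mg", "prescriber": "Dr. Jones (Cardiology)"},
]

shared_view = defaultdict(set)
for record in primary_care + cardiology:
    shared_view[record["drug"]].add(record["prescriber"])

for drug, prescribers in sorted(shared_view.items()):
    print(f"{drug}: known to {', '.join(sorted(prescribers))}")
```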

Maryland is in the process of developing its own HIE (see this press release from DHMH); HIE was a legislative priority for the Maryland General Assembly this year, which did result in a law mandating HIE in some fashion over the next several years.  (See blog post on this topic here.)  More to come!

Second Life – A Virtual Law Office

Second Life (click here for their main web site), an online virtual world system, is now home to my other law office, which you can visit anytime you like by simply loading Second Life on your computer and going to these coordinates: Mullett (104, 16, 150).  For those of you who are used to online gaming, Second Life will be second nature.  For everyone else, Second Life is kind of a wacky place.  For one thing, your “avatar” (a virtual representation of you in the virtual world) can fly and teleport.  You can “own” real estate in Second Life as part of an annual, paid subscription to the system makers at Linden Labs (though, arguably, because the ownership of virtual real estate in Second Life is contingent on you paying your annual dues, the property there is probably not very “real” real estate, unless you think of your dues as property taxes – but that’s for another day).  But you should see some of the things that people have constructed on their property in Second Life – much of it not well connected to anything you might be used to seeing in the real world.

For one thing, my neighbor has his own, private castle cattycorner to me in Mullett.  As you might expect from the internet generally, (especially if you watched Dave Chappelle’s Show where he hypothesized what the internet would be like if it were a mall), there are plenty of “mature” places to visit in Second Life, offering all sorts of “services” for your avatar that might make your mom blush.  And, just like the modern internet has become, Second Life is also full of shopping malls of various sorts, all selling a variety of accoutrement for your avatar (hair, clothes, shoes, magic underwear – you name it, you can probably find it).  And there are a fair number of real estate salespeople who are trying to resell virtual real estate within the system.  Interestingly, there is also a “real” world market of virtual items for sale (not unlike other online gaming systems such as World of Warcraft) on other online services like ebay (there were about 120 items related to “second life” for sale today when I searched ebay).

Second Life does allow users to join without making a payment.  You start to rack up fees when you actually wish to own virtual real estate within the system.  As time has gone on, more people have begun to use Second Life as another web conferencing system (I attended an ABA conference on virtual worlds in 2008 in Second Life).  In addition, larger IT companies like IBM and Intel have begun developing their own web presences in Second Life.

Also of note is that there is a currency that is managed by the creators of the system, Linden Labs, known as linden dollars.  Second Life maintains an exchange system that allows you to use real currency to purchase linden dollars, and also allows you to export linden dollars back into real currency using the credit card or paypal account associated with your Second Life account.  As a result of this connection with the physical world, there are a number of users that make an actual living in Second Life producing virtual goods for their fellow Second Life denizens.  According to the Second Life site today, about $52 million worth of linden dollars and real dollars were exchanged on the Linden Labs exchange system, so it is fair to say that a substantial amount of commerce is ongoing at Linden Labs – in spite of the national recession.

And, once you get past the seedy parts of town, there are a fair number of very interesting things going on within Second Life.  For example, I visited a school that is being constructed within Second Life, Rockcliffe University in Rockcliffe 182, 4, 24.  There are newspapers and magazines available within Second Life (though I suspect that many of these are struggling with their physical world counterparts in the recession).  And there is an abundance of art available for purchase throughout this virtual world.  So what is an IT attorney to do but open up his own Second Life law practice?  Come visit us at Mullett 114, 14, 153!  And I’ll be posting further news from the virtual world here on this blog.

Lessons from IT Management: Prologue

Published in serial fashion, the blog posts in the Technology | Management section of this blog are some thoughts on managing an IT department from an insider’s perspective.

This series is about where the rubber meets the road when it comes to implementing technology for a lowest common denominator of sorts: office employees.  My hope is that others may learn from our mistakes, that those who do this stuff for a living may find a bit of catharsis, and that we can examine ways technology itself might reduce the occurrence of some of the things going on in offices all over the world today.  And I also hope that you will laugh from time to time at some of the tales told tall in this series.  Sometimes IT staff and users want things that are just plain silly.

Even though others have contributed to this series (and some, unwittingly), any mistakes that may remain in the text are mine alone.  Please feel free to comment or contribute if you are so inclined, and enjoy the epic saga that follows!

The Future of Health IT

An important aspect of President Obama’s health plan (partly funded through this year’s stimulus package) is health technology.  As noted in a prior blog post, section 4101 of the ARRA provides qualifying health providers with Medicare reimbursement incentives for implementing Health IT that meets the statutory criteria set out in that section: meaningful use, participation in a health data exchange, and clinical outcomes reporting.  By setting standards and providing incentives, the federal health policy will have a substantial impact on health care technology over the next five years, as billions of dollars are poured into health IT investment.

The question presented here is where will all of this public and private investment lead Health IT in the next few years?

Data Exchange And Availability

One of the areas of major emphasis in the ARRA is the ability of health care providers to more easily share information with other providers, patients, and others who have a legal right to receive such data.  In particular, emphasis has been placed on the ability to transmit data to health exchanges and to produce data for the Feds on health outcomes (such as reporting hemoglobin A1c values over time to evaluate whether a diabetic patient is responding positively to the care provided).  Health data exchanges are on the rise, according to ihealthbeat.org, up to 57 operational exchanges from 42 the year prior.  These health exchanges are being used to exchange data between individual providers in an effort to improve care coordination and care quality.
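
As a minimal sketch of that kind of outcomes reporting, the snippet below takes a patient’s A1c results over time and reports the latest value against a target.  The 7.0% target and the sample values are illustrative assumptions only, not a clinical or regulatory standard.

```python
# A minimal sketch of outcomes reporting: given a patient's hemoglobin A1c
# results over time, report the latest value and whether it meets a target.
# The target and the sample values are invented for illustration.
from datetime import date

a1c_results = [                 # (collection date, A1c %) - invented sample data
    (date(2009, 1, 15), 8.9),
    (date(2009, 4, 20), 8.1),
    (date(2009, 7, 30), 7.2),
]

TARGET = 7.0                    # assumed target for illustration only

a1c_results.sort()                          # order by collection date
latest_date, latest_value = a1c_results[-1]
change = latest_value - a1c_results[0][1]   # movement since the first result

print(f"Latest A1c: {latest_value}% on {latest_date}")
print(f"Change since first result: {change:+.1f} points")
print("At target" if latest_value <= TARGET else "Above target")
```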

More specifically, for patients with several doctors who may specialize in a variety of treatments or health conditions, health exchanges have the potential to ensure that lab data ordered by one physician is made available in a secure and reliable manner to all the physicians involved in providing care.  Health exchanges can also ensure that a patient’s medical history (particularly their prescription history) is available in a consistent format to all care providers, saving time at each visit and reducing risks to patients who might forget a prescription or past medical procedure.  Sharing lab results also has the potential to reduce costs and patient injury by reducing the number of duplicative tests ordered for the same patient by different providers.  This is a common problem for patients who have a coordinating care provider but end up in the hospital, where the attending physician is stuck ordering duplicate tests.

Looking into the future, I would expect health data exchanges (HDEs) to become more prevalent so long as the total cost to implement and maintain an HDE is less than the costs saved or avoided through the data it makes available.  One of the other factors that will affect the growth of HDEs is the number of peer-reviewed studies of their efficacy.  Today there is relatively little information on this topic because most HDEs are new or still under development, but in the next few years more definitive information should be available for analysis and review by eager technologists and researchers.

One of the great challenges for the HDE movement is maintaining patient privacy.  HIPAA was originally implemented in part specifically to address patient privacy, as were a number of state laws on this topic (for example, the Maryland Medical Records Act, see Md. Health-Gen. Code Ann. § 4-301 et seq.).  And other states are getting in on the action to protect consumer privacy, including Massachusetts, Minnesota, and Nevada, just to name a few.

However, laws alone may not be enough to effectively regulate and protect the availability of health data.  Under the present HIPAA enforcement regulations (which were modified by the ARRA this year), the top fines for negligent rather than intentional violations of the security regulations are relatively low compared to the potential size of an HDE (for example, if a company like google or Microsoft were to become a dominant HDE), because the fines are a flat rate per incident rather than being scaled to the company’s gross revenue or the severity of the breach.  The ARRA did move in the right direction this year by layering a four-tiered approach to violations onto the original enforcement authority under HIPAA, but further scaling may be required for the fines to become an effective deterrent to lax security practices.

Furthermore, having a patchwork of privacy laws increases the overall cost of compliance for HDEs, which increases the cost of implementing these systems without necessarily improving the actual security of the information stored at the HDE.  The added cost comes from overlapping and potentially conflicting regulatory requirements, along with the need to respond to multiple authorities with the right to audit or investigate the HDE (as larger HDEs will undoubtedly operate across state lines).  Sadly, I imagine this problem will get worse before it gets better, given the number of relatively autonomous sovereign powers within our country (50 states plus the federal government) and the scope and scale of the privacy issue being considered.

Attitudes Towards Privacy

Our future privacy policies may also be shaped by the attitudes of today’s youth toward privacy.  Social networking sites, for example, expose a great deal of information about the youngest among us, but the predominant users of these systems don’t seem to mind very much.  Of course, facebook is not known for the health data available on its users, so who knows whether college kids would post their latest hemoglobin values as readily as they post about the parties they attend and the pictures snapped of them by the campus paparazzi.  Still, it stands to reason that the next generation’s attitudes toward privacy will be substantially different from those of the generation now called to govern the nation.

The result may be less concern about privacy itself, coupled with stiffer criminal penalties for those who engage in theft of information.  For example, perhaps instead of worrying about whether health data is squirreled away in an underground bunker with Dick Cheney, the future leaders of the nation will make this data generally available via the internet, ultimately reducing its value to would-be thieves.  For myself, I can’t say it matters much if others know that I have high cholesterol and a family history of diabetes, but neither condition carries as much stigma as it might once have (or as other health issues still might).

Data Quality and Trusted Sources

HDEs will also need to address head on the quality and reliability of the data stored in their databases.  Today, exchange arrangements generally do not go beyond the initial setup of some kind of private network and agreement on the file formats acceptable for exchange.  Inherently, one system trusts the data it receives from the other and merely re-publishes it into its own database, identifying the source of the data.  Usernames and passwords alone may not be enough for everyone to know that the data being sent or received is accurate and reliable.

In addition, even though HIPAA (and some other laws) place some emphasis on technical safeguards like encryption, the truth is that little has really been done with these technologies in most systems today to ensure that data entered cannot be repudiated later by the person who purportedly entered it.  For example, many commercially available database systems are not natively encrypted.  Local area network traffic on the wire is rarely encrypted, again relying on border security devices to keep outsiders away from LAN activity.  Passwords are not consistently complex across an enterprise (especially where multiple database systems maintain their own passwords and accounts), and they certainly cannot reasonably be changed frequently enough to ensure a password has not been compromised (at least not without the user community revolting against the IT staff).  And users routinely share passwords despite a federal law against it and the repeated messages from system administrators telling them not to.
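
As a small illustration of the kind of encryption that is usually missing, here is a minimal sketch, assuming the third-party Python cryptography package, of encrypting a single sensitive field before it is written to a database that offers no native encryption.  Key management, which is the genuinely hard part, is reduced to one in-memory key here.

```python
# A minimal sketch (assuming the third-party "cryptography" package) of
# encrypting a sensitive field before it lands in an unencrypted database.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, stored and rotated via a key-management process
cipher = Fernet(key)

plaintext = b"hemoglobin A1c: 7.2%"        # invented sample value
stored_value = cipher.encrypt(plaintext)   # what would actually be written to the table

print(stored_value)                        # opaque token, useless without the key
print(cipher.decrypt(stored_value))        # original value recovered by an authorized reader
```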

Furthermore, data exchanged between systems relies on the initial configuration of the networking that connects the two systems remaining uncompromised.  In the typical data exchange design, there is no further verification that the messages actually received across these systems are correct.  TCP itself was designed with a checksum in each packet, but that only tells the receiver whether the packet received matches what was sent, not whether the data comes from the human or system source alleged (e.g., the laboratory technician or physician who actually created the entry in the first place).
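
One way to add that verification above the transport layer is for the sending system to authenticate each message with a key shared only with the receiver.  The following is a minimal stdlib-only sketch of that idea; the shared key and the sample payload are invented, and a true digital signature scheme would be needed for genuine non-repudiation.

```python
# A sketch of verifying message origin above the transport layer: the sender
# computes an HMAC over the payload with a shared key, and the receiver
# recomputes it before trusting the data.
import hashlib
import hmac

SHARED_KEY = b"not-a-real-key"   # assumed to be provisioned out of band

def sign(payload: bytes) -> str:
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_mac: str) -> bool:
    return hmac.compare_digest(sign(payload), received_mac)

message = b"GLUCOSE|105|mg/dL"              # illustrative lab result payload
mac = sign(message)

print(verify(message, mac))                 # True: payload unchanged, sent by a key holder
print(verify(message + b" tampered", mac))  # False: alteration in transit is detected
```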

I anticipate that authentication will move toward far more sophisticated, multi-factor approaches (even though the biometric movement seems to have lost steam, at least in the general consumer market).  For example, instead of or in addition to a username and password, systems may require a token or other physical credential to grant access (such systems exist and are in general use today).  Other measures may involve thumbprints or other biometrics.  I would also imagine that more sophisticated encryption could be used beyond a 128-bit cipher, and that encryption might occur at a more basic level than it does today (if transmissions are encrypted at all).  For example, databases themselves may be encrypted at a record or table level, or application access could be managed through an encrypted socket instead of the plain text over which many applications operate now.
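
For the token idea in particular, a time-based one-time password is one common approach.  Below is a stdlib-only sketch of RFC 6238 TOTP, the algorithm behind many hardware and phone tokens; the base32 secret is invented, and a real deployment would provision a secret per user and allow some clock drift when verifying.

```python
# A stdlib-only sketch of RFC 6238 TOTP, the algorithm behind many one-time
# password tokens.  The base32 secret below is invented for illustration.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # time step since the Unix epoch
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation per the RFC
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # six-digit code that changes every 30 seconds
```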

Beyond controlling which users may enter data, surely there could be some additional layer of verification once data has been received from a producer system, so that by design it is independently verified before being committed to the receiving system.  The alteration (or simply the erroneous entry) of data in transport from one system to another creates the real possibility of a bad health care decision by professionals relying on that data.  This is certainly one of the major weaknesses of consumer-level HDEs such as those from google or Microsoft, which must rely on the consumer to enter their own lab and pharmaceutical information when that data is not available electronically, or on data providers whose administrative or clerical staff do the data entry without further review before distribution.

HDE Information Security

Today, a number of technologies exist that allow for data backup and redundancy to ensure that systems can be highly available and resistant to significant environmental or system disasters.  One category of technology is called “cloud computing,” which is a kind of modern equivalent to what application service providers (ASPs) of the 1990’s were offering, or what the ancient mainframes of yesteryear offered to computing users back in the bad old days of the 1970’s.  What is fundamentally different today, however, is the possibility of having massively redundant and distributed information systems that belong to a cloud, whereas both ASPs and mainframe computing were often centralized into one server room or a series of server rooms in a single facility.

A common example of computing in the cloud today is gmail, an email service provided by google for free to consumers.  There are still, somewhere, servers connected to the internet and controlled by google that will respond to SMTP requests, but google most likely has these servers distributed all over the planet and connected to a larger, redundant network infrastructure.  Data stored on these servers is likely replicated in real time so that all gmail replication partners stay up to date, regardless of which one you actually connect to when you use your web browser to navigate to your email account.  Gmail has been around for some time now, and there are a fair number of users (26 million according to one article as of last September; wikipedia claims there are 146 million gmail users each month as of July 2009).  Perhaps Health IT will be the next internet “killer app.”

And looking down the road, the future of Health IT likely involves some kind of “cloud computing” model where health data is not stored locally on an organization’s server.  This model will provide for additional flexibility with data transfer, improved system redundancy, and higher availability than is typically possible in a single enterprise or within a single server room.

Cloud computing, however, does pose other security and privacy concerns.  (See this article on CNET that addresses some of these same concerns.)  For example, will staff of the cloud computing service have some kind of access to the actual data entered into the system?  Will these systems have a way of keeping those administrators from changing or accessing data (for example, by encrypting the data to place it out of their reach)?  Who is liable for loss of the data?  Will the HDE seek to (and will courts and lawmakers allow it to) unreasonably limit liability for unauthorized access?  Will the HDE be indemnified by a government agency?  Will the HDE pay for itself by allowing advertisers access to data stored by the HDE?  Will it take a more democratic approach (for example, the ratification votes facebook has recently used to adopt changes to the policies that affect its user community)?

Stay tuned.

Health IT Implementation – an Overview

Health IT has been put back into the forefront of the Obama national health care initiative, in part because of Medicare incentives built into the ARRA for health care providers that implement and meaningfully use a health technology system in the next few years.  The projected cost savings are premised in part on the successful installation and implementation of the information systems to be used by health care providers.  This article will focus on some of the details of implementing an electronic health records system, along with some of the pitfalls that can keep a project from being completed successfully.

The End Goal is Meaningful Use

In order to receive reimbursement from the Medicare program, the ARRA requires that a provider demonstrate meaningful use of the system, connection to a health data exchange, and submission of data on clinical quality measures for patients at the practice.  (See earlier post on this issue.)  Reaching these goals goes beyond the mere technical installation of some computer system; “meaningful use” in particular will likely require health care providers to show that they actually use the computer system in managing patient care, reducing errors, and improving health outcomes for individual patients.  Getting there requires effective planning for the project and a productive implementation process.

The good news for providers who want to implement an EHR is that: (a) the data a provider needs to effectively see patients will be available when it is needed (no more “lost chart syndrome”), (b) the chart documentation will support the diagnosis and E&M codes billed to the insurer, (c) EHRs can be tightly integrated with a practice management system to reduce data entry errors and improve billing, (d) most EHRs make clinical or mandated reporting easier than paper charts do, (e) lab results can be electronically imported into the EHR from major lab providers, (f) improved E&M coding can lead to better reimbursement, and (g) an EHR investment can be viewed by your staff as an investment in them, leading to higher staff retention and satisfaction.  But there is a cost to achieving these benefits.

For one, some of the office workflows for handling patient care may need to be modified or adjusted to incorporate the EHR.  Some workflows that operate on paper in an office will not convert efficiently to a computer system.  Forms used to process or document patient care may also need to be modified when they are converted into the EHR.  EHR installations for health care providers tend to expose workflow problems and breakdowns that require attention in implementation for the project to be successful.

Secondly, all the staff in the office will need to be computer literate, and generally, physicians and other health care providers will need to be able to use a computer effectively while examining their patients.  This has become less of an issue as more doctors and other providers are trained to use a variety of computer systems at medical school, but computer literacy is still a major issue for some practices in the nation.

Third, EHR projects are high risk – there is a substantial chance that the project will be derailed for any number of reasons, including a lack of a process for effectively making key decisions, office politics, the capital expense to acquire computer hardware and software, and a lack of technical expertise among the implementation team, among other challenges.  These can be overcome or at least mitigated by sufficient advance planning by the organization.

And finally, most studies of EHR installations suggest that your practice will be in the minority of practices using an EHR (though there has been an improvement in the market penetration here over the last few years).  This is partly because of the expense of implementing the systems, and the longer-term costs of maintaining them.

You can get there if you have a good plan.

Manage Expectations Early and Often

No, an EHR will not solve your workflow problems without your help.  An EHR is not free, even if licensed under an open source software license.  The data collected in the EHR is valuable, but will require further technical work to be usable for research or analysis.  Staff can’t keep doing things the same way and expect a different outcome (besides this being one definition of insanity, EHRs are not magical beasts with wings, and magical thinking does not lead to a happy end user).  Doctors won’t be able to see 50 patients per day after the install if they were only able to manage 20 per day before.  A project that lacks attainable goals will fail.

Any system project can be a victim of unreasonable or unrealistic expectations.  Those leading the project need to be frank about what can be achieved and at what cost to the staff using the EHR.  Expectations can be managed by establishing tangible goals and having a workable project plan with real milestones and a clear assessment of the resources (financial and staff time) that will be needed to reach each one.  For example, implementing the EHR two months after purchasing it can be realistic, but only if the provider’s office is prepared to commit significant time to the planning and installation, particularly in identifying forms that need to be developed electronically and lab interfaces that need to be installed (two of the most time-expensive portions of an EHR implementation).  The need for effective training also cannot be overstated – staff should not expect to pick up use of the system in an hour or two, or to learn as they go with live patients in the room.

Picking an Information System

Finding the right EHR is an important task and should not be left to chance.  There are a lot of EHR vendors in the marketplace today with a variety of installations, histories, and levels of effectiveness.  Developing a written request for proposal (RFP) and requiring an objective process for evaluating responses to the RFP is essential to fairly evaluate the vendors in the marketplace.  Sending the RFP out to 100 vendors is not helpful, nor is having a 100-page requirements section.  But your prospective partner for this project should be able to effectively respond to your RFP and explain in satisfactory detail what the options and costs are for implementing the proposed system.

Furthermore, your organization should form a search committee made up of enough staff to provide meaningful input on the responses to the RFP and to interview qualified vendors about the needs of the essential practice areas.  Vendors should also be able to competently demonstrate their product to the committee’s satisfaction, so that the committee can identify the best two candidates for the job.

To help encourage staff buy-in (where your facility is sufficiently large that the search committee may not represent all interests), I have also recommended that the finalists demonstrate their products to all staff and that the final decision be put to a group vote.  This doesn’t work in all organizations, but the more effort you put into including the staff who will use the system, the more buy-in to the project you will garner, which increases the odds of a successful implementation.

Vendor Negotiations

Once you have identified the best candidate EHR, your organization should begin to examine the terms of the contract with the EHR vendor.  Most vendors have a standard form contract that describes the terms of the relationship, particularly for ongoing support and updates to the product.  These contracts are complicated and an attorney can be helpful to ensure that the contract fairly represents the relationship, costs, and promises made by the vendor along the way.

Negotiations can take some time to complete, particularly where multiple parties or substantial costs are involved.  Hammering out contract details with the vendor is an important step in the planning process.

Major Milestones

Once a vendor has been chosen, most EHR implementation project plans will have the following major milestones on the way to a successful go live: (a) form a planning committee, (b) form a technical team, (c) review and make decisions on the requirements for the project, (d) install the server, software, and workstation software, (e) develop all required clinical content (such as electronic forms, flowsheets, and data requirements) for go live, (f) implement all interfaces for data flowing in and out of the EHR, (g) convert all charts from paper into the EHR, (h) complete staff training, and (i) go live with the system.

The planning committee should include the clinical departments that will be using the system, and should be designed to regularly meet up to and through the go live date.  The committee should be charged with enough authority to make decisions about the project’s implementation, and should become your initial group of “super-users” or staff with more training about the EHR.  Your super users should then become sources of information for the rest of the staff as they work through integrating the EHR into their practice.

The technical team is comprised of the IT staff that are responsible for installing the server and workstation equipment, getting the EHR software and database installed properly, configuring interfaces between systems, and installing any supporting network or peripheral technology.  This team should regularly report to the planning committee or the project manager for the installation.

The planning committee is responsible for making the decisions about how the EHR will be implemented.  The vendor supplying the system should regularly participate in the committee’s meetings, and generally the project manager should chair the committee.  Actions and decisions of this committee should be documented and distributed to the members.  In my experience, the meetings of the committee are geared toward training the members on the details of the EHR so that they can determine how the system should work for their departments.  These meetings can be contentious, as a number of people will need to agree, but in the longer term this process helps to make sure that the project is implemented appropriately.

This committee also should be responsible for identifying project priorities.  The reality is that no EHR implementation can go live with every request ready – there are always too many requests and not enough time to implement all of them.  This committee should be prepared to identify what’s most critical and clarify these priorities to the staff involved in the installation.

In addition, this committee should be committed to being thorough and to addressing concerns about specific implementation decisions and priorities along the way.  Some decisions made early on can be very time consuming and costly to correct later.

The “clinical content” of the application includes the electronic forms that will be used to document care, the organization of the sections of the EHR that display structured data (such as lab results for a patient), and other functional areas of the EHR that are susceptible to modification at implementation.  This development may be handled by the vendor.  After go live, however, the provider may need to maintain the content developed during implementation, or be in a position to add new content.  In some cases, third parties may sell premade clinical content separately from the EHR vendor.  All of this customization of the product requires special attention to ensure that the content meets user requirements and is developed in a manner consistent with standard practice.

Most EHRs support some interfacing with other products, using a common language like HL7.  If interfaces with other software or third parties are essential to the implementation, substantial lead time and attention to detail are required for those interfaces to be ready by the project’s go live date.
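
For readers who have not seen one, the sketch below shows roughly what these interfaces pass around: a pipe-delimited HL7 v2 message split into segments and fields.  It is a simplification with invented content; real interface engines must also honor the encoding characters declared in the MSH segment, repetitions, escapes, and acknowledgments.

```python
# A simplified sketch of reading an HL7 v2 message of the sort EHR interfaces
# exchange.  The message content is invented for illustration only.
SAMPLE_HL7 = "\r".join([
    "MSH|^~\\&|LAB|ACME|EHR|CLINIC|200907011200||ORU^R01|123|P|2.3",
    "PID|1||000123^^^MRN||DOE^JANE",
    "OBX|1|NM|GLU^Glucose||105|mg/dL|70-110|N",
])

def parse_hl7(message: str):
    """Yield (segment_id, fields) for each segment in a pipe-delimited HL7 v2 message."""
    for segment in filter(None, message.split("\r")):
        fields = segment.split("|")
        yield fields[0], fields[1:]

for seg_id, fields in parse_hl7(SAMPLE_HL7):
    print(seg_id, fields)
```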

Some meaningful portion of the existing paper charts will need to be converted to electronic format into the EHR, prior to go live if at all possible.  This is a very time-intensive process, and is often used as a training opportunity for users, who can be scheduled to convert specific charts as part of learning how to use the EHR.  However, most practices have many more charts than users available to convert them, and many project planners will budget additional resources to aid in the paper conversion process.

Some practices opt to extract specific data from a paper chart into electronic format, using specialized clinical content for this purpose.  Other practices may simply scan and index the paper chart documents as is into an electronic document and attach it to the chart as the chart history.  Still others will do a hybrid of these two solutions.

Training is also a very important aspect of any EHR implementation.  In my experience, up to 20 hours of training may be required for super users of the EHR; the minimum is about 4 hours for sufficient exposure to the basics.  Depending on the total number of staff to be trained, scheduling training classes for an organization may be a substantial time commitment.  Generally the EHR vendor can give guidelines on the minimum training needed to gain proficiency with the system.  Note that no implementation’s training ends at go live; post-go-live training and ongoing training for new staff are continuing expenses of the EHR.
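
As a back-of-the-envelope illustration of that scheduling burden, the sketch below uses the 4-hour and 20-hour figures above; the staff counts and class size are assumptions, not figures from any particular practice.

```python
# Back-of-the-envelope arithmetic for an EHR training schedule, using the
# 4-hour basic and 20-hour super-user figures from the text.  The staff
# counts and class size are assumptions for illustration only.
import math

BASIC_HOURS = 4          # minimum exposure to the basics, per the text
SUPER_USER_HOURS = 20    # upper end for super users, per the text
CLASS_SIZE = 8           # assumed trainees per classroom session

regular_staff = 40       # assumed headcount
super_users = 6          # assumed headcount

basic_sessions = math.ceil(regular_staff / CLASS_SIZE)
trainee_hours = regular_staff * BASIC_HOURS + super_users * SUPER_USER_HOURS

print(f"Basic classes needed: {basic_sessions} sessions of {BASIC_HOURS} hours each")
print(f"Total trainee-hours to budget: {trainee_hours}")
```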

Cloud Computing and Other Buzz Words

The technology that drives health care today is changing in response to increased concerns about security and reliability, and to external regulations like the HIPAA security regulations.  In addition, the HITECH portion of this year’s stimulus law provides incentives for health care providers to adopt technology that allows for health data exchange and for quality reporting (a data-driven process for reporting outcomes on certain quality measures as defined by the Secretary of Health and Human Services).  There are a fair number of technology vendors that provide electronic health records (EHR) systems today, and also a fair number of vendors that have developed business intelligence or more sophisticated data reporting tools.  Health data exchange is a newer field; google and Microsoft have begun developing systems that allow users to establish a personal health record database, and some states have started planning for larger scale data repositories, but this concept is still in its beginning stages.

A buzz word today in technology is “cloud computing,” which is a fancy way of describing internet systems that businesses can rent from service providers to perform business tasks.  The idea is not new, even if the buzz word is; in days of yore, we called these “application service providers,” or ASPs for short.  I suppose the IT marketing folks got sick of being compared with a nasty snake and thought clouds were better (or maybe more humorous, if they had ever read Aristophanes).  Of course, the pejorative “vaporware,” which roughly describes a software vendor that markets a product it does not yet actually have to sell, also rings of clouds and things in the sky.  And the old “pie in the sky” – a way of saying “that’s a nice idea but has no hope of being useful down here where mere mortals live” – could also relate to clouds.

That aside, there may be something to cloud computing for us mere mortals.  One of the important aspects of technology is how complex it actually is under the covers, and the degree and scope of support actually required to get it to work properly.  Larger businesses that have high concentrations of technology engineers and analysts are better equipped than the average business to deal with technology issues.  In this respect, cloud computing offers a business a way to “leverage” (another business term thrown around casually) the expertise of a fair number of technology experts without having to hire all of them full time.  One of the dilemmas for business consumers, however, is how much they need to trust the technology partner they rent from.  This is the same problem that ASPs originally faced years ago.  What happens to the data in the cloud when the cloud computing vendor either stops providing the service you are using, or just goes out of business?  How do the businesses work together on transitioning from one cloud to another, or from the cloud back in-house?  What if the business wants to host its own cloud onsite or at its existing hosting facility?  How are changes to the hosted application controlled and tested?  How often are backups performed, and how often are they tested?  How “highly available” is the highly available system being hosted?  How are disasters mitigated, and what is the service provider’s disaster recovery/business continuity plan?  How are service provider staff hired, and what clearance procedures are employed to ensure that staff aren’t felons who regularly steal identities?  The list of issues is a long one.

The other dilemma for businesses that want to use cloud computing services is that many of these services have a standard form contract that may not be negotiable, or essential parts of it may not be negotiable.  For example, most cloud computing vendors have hired smart attorneys who have drafted a contract that puts all the liability on the customer if something goes wrong, or otherwise limited liability so severely that the business customer will need to buy a considerable amount of business insurance to offset the risks that exist with the cloud, should it ever fail, rain, or just leak into the basement.

On the other hand, businesses that have their own IT departments have the same set of risks.  The difference, I think, is that many businesses do not have liability contracts with their otherwise at-will IT staff.  So, if things go horribly wrong (e.g., think “negligence”), the most that might happen to the IT person responsible is immediate termination (except in cases of intentional property theft or destruction, both of which may lead to criminal but not automatic civil liability for the IT person involved).  How much time does a business have to invest to develop and implement effective system policies, the actual systems themselves, and the staff to maintain those systems?

The advent of more widely adopted EHR systems in the U.S. will likely heat up the debate over whether to use cloud computing services or virtualized desktops hosted centrally by a hosting company in order to roll out the functionality of these systems to a broader base of providers (currently estimated at about 1 in 5 providers using some form of EHR).  Companies whose services cost less than the Medicare benefit to providers, while helping providers comply with the security regulations, will likely have the most success in the next few years.  Stay tuned!

Google and Copyright Infringement

Google, back in 2004, began an endeavor to index the contents of an enormous number of books through its search engine, so that google users would be able to full-text search books that were otherwise unpublished on the internet.  Under U.S. Copyright law, books that were published before the 1920’s (and certain texts published after that time that did not comply with the renewal requirements and were not saved by the Copyright Act of 1976) are in the public domain and can be freely copied without the need for prior consent or the payment of royalties to the author or his or her estate.  Hence, you can find a copy of Edward Gibbon’s Decline and Fall of the Roman Empire on google’s book search, because Mr. Gibbon wrote the manuscript well before the earliest date that a book could be protected by current U.S. Copyright law.  Of course, what got google into trouble was not long dead authors but very alive ones (or ones whose estate or a third party owned a valid copyright to the work), which led to a lawsuit against google in federal court in 2005 by several named plaintiffs and an association, the Author’s Guild, which represents over 8,000 other authors.  The Author’s Guild, et al. v. Google, Inc., 05 CV 8136 (S.D.N.Y. Sep. 20, 2005).

The complaint in 2005 alleged that google’s indexing of these books without paying a license fee to the individual authors with valid copyrights was copyright infringement writ large, and that the indexing was done in search of advertising revenue (an expressly commercial purpose).  Infringing the valid copyright of another without paying the customary license fee is the sine qua non of an unfair use, and I suspect that were we to see this tested in a court, google would likely have lost the suit.  However, the case didn’t get very far as the parties entered into negotiations to settle the matter.  In 2008, a proposed settlement was announced (see a CNET article here) which would have had google pay the Author’s Guild about $125 million in royalties for google to continue its “exploitation” of works that were probably protected by copyright.

This settlement has not sat well with others who are concerned that google’s book collection looks a tad monopolistic (including the Department of Justice, which opened an antitrust investigation, according to Reuters).  There is concern in the online community that google may have control of too much information, which may ultimately stifle innovation by others.  Monopolizing a market generally violates the Sherman Anti-Trust Act, which can lead the Department of Justice to file suit against the alleged monopolizer.  Such suits have caused large companies like IBM and AT&T to either stop seeming to be monopolies or to break up outright into smaller units.  Last year, anti-trust concerns stopped google from establishing a search marketing relationship with yahoo, even though google was probably not trying to control the world of search but just trying to help yahoo fend off a purchase by Microsoft (which ultimately did fail and subsequently led to the ouster of Yahoo’s CEO and co-founder, Jerry Yang, later in 2008).

Ironically, holders of a valid copyright exercise a legalized monopoly over the thing copyrighted, which, while limited to the duration of the author’s life plus 70 years, is a relatively long time.  For highly valued items, such a monopoly could effectively stifle innovation, at least for those who wish to make derivative works from the copyrighted work but cannot afford to pay the “customary fee” to the copyright holder.  Effectively, the copyright holders represented by the Author’s Guild are one set of monopolists fighting with another alleged monopolist, google, which is probably far larger, but probably not otherwise more or less sympathetic.  On the other hand, the copyright monopoly does have limits built into the rights granted under the Copyright Act itself, including fair use under section 107, which allows some academic and non-profit expression by individuals who would otherwise be copyright infringers.  Against google’s alleged monopoly of online information, only a very large sum of money invested in a competing search engine can offset the market that google now controls in search traffic and search advertisements.  There is no “fair use” exception to google’s alleged monopoly over information that would level the playing field.

In years past, the anti-trust branch of the Department of Justice may have tried to break up a monopoly and/or have a governmental agency regulate the resulting company(ies).  For example, in Maryland, the Public Service Commission is responsible for watchdogging the utility and phone companies.  Verizon, which operates in several states including Maryland, is a smaller version of AT&T from the 1970s (or perhaps larger given the overall growth in telecommunications in the U.S. over the last thirty years).

The question to be answered is whether Verizon is any more responsive to customers today than AT&T was before the big break up, and whether Verizon is any less stifling of competition and innovation than its predecessor, AT&T.  Answering these questions may help to answer whether google ought, as a matter of policy, to be broken into smaller operating groups and/or regulated by the federal government like an “internet utility” company.

With regards to responsiveness, this is a hard question to answer.  A regulated utility like Verizon is still a very large entity, and as a matter of statistics, Verizon will make a substantial number of errors in service provision and billing that will lead to user complaints.  I don’t have any hard data on complaints over time or resolution rates to compare pre- and post-break up of the entity.  And as to google, I’m not sure that this is much of an issue.  The truth is that there are other search engines in the market today, and it is very easy for an internet user to access these search engines.  They may not have the same content indexed, but all of them use some form of search advertising to help subsidize your ability to freely search on them (or you have to pay a subscription fee to use them).  The state of search today may not really compare with the customer service issues of telecom customers of years past that were stuck working with the Baby Bell to get their phone to work properly.

With regards to the problem of stifling competition, the telecommunications bust at the beginning of this century was in part the result of the Baby Bells like Bell Atlantic/Verizon who controlled the last mile infrastructure that connected competing telecoms to customers.  After a century, the Baby Bells had so much more invested in the public phone and data networks that no small start up could possibly compete.  And whether court-ordered or not, the engineers at Verizon were not going to make a competitor’s service request a higher priority than servicing direct Verizon customers.

So, if the equivalent here is for google to keep its database of indexed books but share access with other search engines, I doubt the outcome would be much different – most people trying to find a book would get a better response from google’s search engine than from a competitor’s.  Alternatively, if google were required to publish its search engine algorithms and code, how long and how much money would it take for a competitor to grow to sufficient size to accumulate the scope and depth of data that google now handles every day?  Ten years?  Twenty?  And would the internet be a better place because there are two identical search engines?  This is like when there were two different paper phone books.  Other than the additional tree casualties, I don’t think the public was better served by two phone books, and I doubt having two identical databases of web sites on the internet would be much better, either.

What about an Internet Public Service Commission?  For the phone company, the PSC in each state is empowered to receive and investigate complaints from customers – typically about a billing problem, but the PSC investigators examine related issues as well.  My experiences with the PSC here have been positive.  The PSC’s opening of an investigation usually gets my complaint to the right person at Verizon, who is then able to resolve the problem that the customer service representative was either not empowered to resolve or unwilling to resolve.  What, then, would the IPSC be charged with handling from the public about google?  Lost documents in the internet cloud that were stored with google?  Google’s search crawler doesn’t search my site quickly enough?  My web site doesn’t show up in search results high enough based on my keywords?  Security breaches at google?

I suppose another solution would be to make google a national library and attach it to the Library of Congress, which could use the google revenue stream to pay for scanning and indexing everything in the Library of Congress to make it generally available to the public.  Unrelated parts of google could be spun off as private enterprises that would not operate with public money or public regulation (such as the cloud computing aspects of google).

Stay tuned for developments!