EHR & HITECH – More Money for More Health Records

The American Recovery and Reinvestment Act (ARRA) of 2009 contains a number of provisions to encourage the adoption of electronic health records, with the goal of increasing efficiency in the health care system while hopefully lowering costs.  The Office of the National Coordinator for Health Information Technology (see website) has established a two-part strategic plan for expanding the installation of health records systems throughout the U.S.

Section 4101 of ARRA describes the incentive payment process through the Medicare program for eligible professionals who are “meaningful EHR users,” as that term is defined in subsection (o)(2) of the amended section 1848 of the Social Security Act (cited as 42 U.S.C. 1395w-4).  The tortured definition provided within the statute has three basic requirements: (a) the certified EHR is being used in a “meaningful” manner to the satisfaction of the Secretary, (b) the EHR is connected to some “electronic exchange of health information to improve the quality of health care”, and (c) data on clinical quality measures is submitted to the Secretary on a regular basis.  The data to be submitted to the Secretary is to be governed by contract between Medicare and the provider, and the proposed measures must go through the public notice and comment period in the Federal Register – similar to other proposed regulations under federal law.

The incentives payable to eligible providers are spread out over five years, with a maximum first-year payment of $18,000 (if starting in 2011 or 2012), or $15,000 otherwise.  That is, unless the provider’s first year of EHR adoption is after 2014, in which case the provider is not eligible for any incentive at all under this section.  So, a provider who adopted a certified EHR in 2011 and demonstrated that (a) he was using it in a meaningful manner, (b) it was connected to a data exchange, and (c) data was produced to the Secretary as required, would be eligible for payments totaling $44,000 over a five-year period starting in 2011 ($18,000 in 2011, $12,000 in 2012, $8,000 in 2013, $4,000 in 2014, and $2,000 in 2015).

If instead the provider did all of the above but did not start until 2013, the first payment in 2013 would be $15,000, for a total of $41,000 over the five years 2013-2017.  Sadly, a provider who did all of the above for the first time in 2015 would get bupkis.
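For those who like to see the arithmetic, here is a minimal sketch of the payment schedule as described above.  The declining tail of payments ($12,000 / $8,000 / $4,000 / $2,000) comes from the two examples in this post; assuming that same tail applies to other start years is my extrapolation, not a reading of the statute itself.

```python
# Sketch of the Medicare EHR incentive schedule described above.
# The tail amounts for start years other than 2011 and 2013 are an
# assumption extrapolated from the post's two examples.

def incentive_schedule(first_year: int) -> list[int]:
    """Annual payments for a provider whose first year as a
    'meaningful EHR user' is first_year."""
    if first_year > 2014:
        return []  # adopt too late and you get bupkis
    first_payment = 18_000 if first_year in (2011, 2012) else 15_000
    return [first_payment, 12_000, 8_000, 4_000, 2_000]

for year in (2011, 2013, 2015):
    payments = incentive_schedule(year)
    print(year, payments, "total:", sum(payments))
# 2011 [18000, 12000, 8000, 4000, 2000] total: 44000
# 2013 [15000, 12000, 8000, 4000, 2000] total: 41000
# 2015 [] total: 0
```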

Green IT: How Virtualization can Save Earth and Your Butt

Technology continues to evolve, providing people with new functionality, features, information, and entertainment.  According to Ray Kurzweil, a number of metrics for computer performance and capacity indicate that our technology is expanding at an exponential rate.  Sadly, the physical manifestations of technology are also helping to destroy the planet and poison our clean water supplies.  According to the EPA, nearly 2% of municipal waste is computer trash.  While that is an improvement over recent years, only 18% of computers, televisions, and related solid waste is actually recycled by consumers, which leaves millions of tons of unwanted electronics in landfills each year.  Businesses contribute to this problem as well, since they are major consumers of the computers, printers, cell phones, and other electronics needed to operate.

Computers that are placed into a landfill pose a significant environmental threat to people and wildlife.  Electronics can contain a number of hazardous materials, such as lead, mercury, cadmium, chromium, and some types of flame retardants, which, in the quantities of disposed equipment, pose a real threat to our drinking water.  See the article here with the details. Lead alone in sufficient quantities can damage your central nervous system and kidneys, and heavy metals are retained in the body, accumulating over time until they cross a threshold beyond which the symptoms can be fatal.  See Lead Poisoning Article. Mercury, cadmium, and chromium aren’t any nicer to people or animals.

Everyone should recycle their electronics through a respectable electronics recycler (See Turtle Wings website for example).  However, you can also reduce your server fleet and extend the life of your computer equipment through virtualization.  (See an earlier post on virtualization on this blog).  Virtualization of your server equipment means that you will use fewer physical servers in order to present more virtual machines to your user community for accessing print, authentication, file sharing, applications, web, and other computer services on your network.  Fewer servers in use means that you will have fewer physical server devices to purchase over time and fewer servers to recycle at the end of their life.  Virtualizing your desktops can help by extending the useful life of your desktops (they are just accessing a centrally stored virtual desktop, on which all the processing and storage occurs, so a desktop with little RAM and CPU will work for longer), and also reducing the amount of electricity that your organization uses per computer (if you then switch to a thin client such as a Wyse terminal or HP computing device).

Virtualization can also improve your preparedness for disasters, whether by flood, virus, or terrorist.  For one thing, backing up the data file that represents your virtual servers is easier, can be done during normal business hours, and can be far more easily replicated to another site than the contents of a physical server.  Furthermore, virtualization can reduce the entry costs to implement a disaster recovery site because you can use less overall equipment in order to replicate data from your production environment, so your ongoing operating costs are reduced as compared to a physical server configuration.  Testing upgrades is easier because you can duplicate a production virtual server and test the upgrade before rolling it out to the live system (which costs less than buying another physical server and running a copy of the system on it to run the testing).  Virtualizing desktops also simplifies some of the support and administrative tasks associated with keeping desktops running properly (or fixing them when they stop working right).

So, before you buy another physical desktop or server, think about whether virtualization can help save Earth and you.

Health IT & Open Source

The truth is that I may just be getting annoyed about this debate.  A recent blog posting on Wired (click here for the article) frames the debate over health technology in terms of open source versus legacy or proprietary code, the latter being the enemy to innovation, improved health outcomes, and usability.

First off, an open source program is merely one governed by an open source license (often some version of the GPL), which means that other developers can study it, make derivative works, or otherwise include your open source code in their subsequent open source code.  Developers who freely work together to write something cool are developers writing code.  They aren’t necessarily health experts, physicians, or efficiency gurus; in fact, they may not even have health insurance if they live in the U.S. (1 in 6 of us are uninsured).  The fact that code is open source does have a big impact on how U.S. copyright law protects the work, but it doesn’t mean that an open source developer is somehow more in tune with health IT requirements, how best to integrate the system into a physician’s practice, or even necessarily what the actual requirements are for a physician to see a patient and document the visit to avoid liability for fraud or malpractice.  That’s because for developers, requirements come from outside the development community – from users.

And guess what – proprietary developers of software listen to their user community to understand their requirements.  It’s part of the job of developers, regardless of whether the code is open source or proprietary.  And, for everyone participating in the global economy, the people that pay for your product generally drive the features and functionality in it.  If you can’t deliver, then your user base will go find someone else who can deliver.

Now, for larger health organizations, health records systems are a multi-year investment.  This inherently locks that health organization into a longer term, and more conservative, relationship with their health IT vendor, which tends to reduce the amount of change introduced into a health records system over time – especially for the larger vendors that have a lot of big clients.  The little developer out there writing code at 3am is certainly going to respond to market changes far more quickly than a really big corporation with a health IT platform.  But you know what?  Try getting the little guy to support your 500 desktop installations of his software 24×7.  Do you really think he can afford to staff a help desk support function around the clock for your business?  What happens when he has two customers with emergencies?  Or he wants to get some sleep?  And what about change control?  Even big vendors stumble in testing their code to make sure it works and is secure before releasing it (think Microsoft).  Solo, open source developers, even working in informal teams, are going to miss at least as often as a larger vendor, and introducing a lot more changes just increases the frequency that an untested change becomes an “unpublished feature” aka “blue screen of death.”  Trust me on this one: the health care user base is not going to be very tolerant of that.

Repeatedly, I hear the refrain that this stimulus money is going to go to systems that can be put to a “meaningful use,” and that this is going to exclude rogue open source health IT developers from being funded, squelching innovation in the marketplace.  I imagine that complying with the security regulations under HIPAA probably hinders innovation, too, but those regulations increase the reliability of the system vendors that remain in the marketplace and reduce the risk to the patient data that might be in their computer systems.  Setting minimum standards for health records systems may favor incumbent systems, but honestly – is that so wrong?  Isn’t the trade-off here that when someone buys a certified system, they can have the satisfaction of knowing that someone without a vested interest in the product thought it had certain features or a proven record of delivering certain outcomes?  Perhaps the certifiers aren’t neutral because they come from the EHR industry, but if I recall correctly, the people who run the internet have committees with representatives from the internet industry, yet I rarely hear that the standards for the POP3 protocol unfairly burden new or open source developers.

Having someone like a government agency set standards for EHRs is a lot like the government setting the requirements for you to receive a driver’s license.  Everyone who drives needs to understand what the red, octagonal sign with the capital letters S-T-O-P means.  On the other hand, you may never parallel park again, but you had better learn how to do it if you want your license to drive in Maryland.  Standards are always a mixed bag of useful and not-so-useful rules, but I don’t think there are too many people out there arguing that the government should not set minimum standards for drivers.  A certification requirement for EHRs to establish minimum standards is no different.  Ask the JCAHO people about it.  Ask the HIPAA police.  Ask the IT people you know.  If you are going to develop an EHR, you had better secure it, make sure the entries in the database are non-repudiable, and have a disaster recovery approach.  Don’t know what these things are?  Do your homework before you write a computer system.

Now, another refrain has been to point at how proprietary systems have failed the world of health care delivery.  For example, look at how more kids died at the Children’s Hospital ER in Pittsburgh after the hospital implemented an EHR (I can feel a class action lawsuit in federal court).  Who implements EHRs in ERs?  So the doctor is standing there and a patient is having a heart attack.  What should the doctor’s first act be?  To register the patient into the EHR and record his vitals?  I would think the doctor should be getting out the paddles and worrying about the patient’s heartbeat, but then, I am an attorney and systems guy, not a physician.  Look – dumb decisions about implementing a computer system should not lead to subsequent critics blaming the computer system for not meeting the requirements of the installation.  An EHR is not appropriate in every place patients are seen or for every workflow in a health care provider’s facility.  No knock on the open source people, but I don’t want my ER physician clicking on their software when I am dying in the ER, either.  I don’t want my doctor clicking anything at all – I want her to be saving me.  That’s why I have been delivered to the ER.

Now, VistA is getting a lot of mileage these days as an open source, publicly funded, and successful example of EHR in action.  And it is free.  But in fairness, VistA is not a new piece of software recently written by three college kids in a garage somewhere in between World of Warcraft online gaming sessions.  This program has been in development for years.  And “free” is relative.

For example, if you want support, you need to pay for it.  If you want to run it in a production environment, you will need to buy equipment and probably get expert help.  If you want to implement it, you will need to form a committee, develop a project plan, implement the project intelligently with input from your users, and be prepared to make a lot of changes to fit this system (or any system) into your health facility’s workflows.  And if you find yourself writing anything approaching software, that will cost you something, too, as most health care providers do not have a team of developers available to them to modify any computer system.  So, “free” in this context is relative, and genuinely understates the scope and effort required to get any piece of software to work in your facility.  “Less” may be a more appropriate adjective.  But then, that’s only true if you can avoid costly modifications to the software, and so far, there is no single EHR system that works in every setting, so expect to make modifications.

That’s my rant.  Happy EHR-ing!

The Battle Over Health IT Has Begun

The battle lines on how to spend the money for technology to improve health care are beginning to be drawn.  As a former director of an IT department at a health center which implemented a proprietary health record system in 2003, I can offer a useful perspective on some of the issues.  Phillip Longman’s post on health records technology discusses the issue of using a closed versus an open source health records system, which is part of the larger debate on open source and its impact on application development online.

I’m generally a fan of the open source community.  Shareware developers have been writing useful applications and offering them to the public ever since I started using a PC as a kid back in the ’80s.  There is a lot to be said for application development done in a larger community where sharing is okay.  For example, my blog runs on WordPress, open source blogging software that provides a platform not just for writers like me, but also for developers to create cool plugins that do all sorts of nice things like integrate with Google Analytics, back up your blog, or modify your blog’s theme, just to name a few that I happen to use regularly (thanks to all of you who are linked here).

In 2003, we looked at a number of health records systems, ultimately allowing our user community at the time to choose between the two finalists, both of which were proprietary systems.  One of my observations at the time was that there was a wide array of information systems that were available to health care providers, some of which were written by fellow practitioners, and others that were written by professional developers.  I would be willing to bet that today there are even more health IT systems out in the market place.  We ended up going with a product called Logician, which at the time was owned by MedicaLogic (now a subsidiary of the folks at GEMS IT, a division of General Electric).

Logician (now called Centricity EMR) is a closed source system that runs in Windows, but allows for end users to develop clinical content (the electronic equivalent to the paper forms that providers use to document care delivery) and to share that clinical content with other EMR users through a GE-hosted web site for existing clients of the system.  In addition, Logician has a substantial following to support a national user group, CHUG, and has been around long enough for a small cottage industry of developers to create software to integrate with Logician (such as the folks at Kryptiq, Biscom, and Clinical Content Consultants who subsequently sold their content to GEMS IT for support).

After six years of supporting this system, I can assure you that this technology has its issues.  That’s true, of course, of almost all information systems, and I would not suggest that the open source community’s eclectic collection of developers produces anything less buggy or easier to support.  And, in fact, I don’t have any opinion at all as to whether health records would be better off in an open source or proprietary health record system.  Health professionals are very capable of independently evaluating the variety of information systems and choosing a system that will help them do their jobs.  One of the big reasons that these projects tend to fail is a lack of planning and investment in the implementation of the system before the thing gets installed.  This process, which, when done right, engages the user community in the project and guides it to a successful go-live, is probably more important and actually takes more effort than the information system itself.

Mr. Longman criticizes the development model of “software engineers using locked, proprietary code” because this model lacks sufficient input from the medical users who ultimately must use the system in their practices.  I suppose there must be some health records systems out there that were developed without health provider input, but I seriously doubt they are used by all that many practices.  I do agree with Mr. Longman that there are plenty of instances where practices tried to implement a health records system and ended up going back to paper.  We encountered several of these failed projects in our evaluation process.  But I would not conflate proprietary systems with the failure to implement; proprietary systems that actually include health providers in their development process can be successfully implemented.  Open source can work, too.  As Mr. Longman points out, the VA has been using an open source system, now called VistA, which works well for its closed delivery model (patients at the VA generally get all of their care at a VA institution and rarely go outside for health care).

My point is that the labels “open source” and “proprietary” alone are not enough to predict the success or failure of a health records system project.  Even a relatively inexpensive, proprietary, and functionally-focused system that is well implemented can improve the health of the patients served by it.  There is a very real danger that the Obama administration’s push for health IT will be a boondoggle given the scope and breadth of the vision of health IT in our country.  But the health industry itself is an enormous place with a wide variety of professionals, and the health IT market place reflects this in the varied information systems (both open source and proprietary) available today.  I would not expect there to be any one computer system that will work for every health care provider, regardless of who actually writes the code.

How Virtualization Can Help Your DR Plan

Virtualizing your servers can help you to improve your readiness to respond to disasters, such as fires, floods, virus attacks, power outages, and the like.  Popular solutions, such as VMWare’s ESX virtualization products, in combination with data replication to a remote facility, or backups using a third party application like vRanger can help speed up your ability to respond to emergencies, or even have fewer emergencies that require IT staff to intervene.  This article will discuss a few solutions to help you improve your disaster recovery readiness.

Planning

Being able to respond to an emergency or a disaster requires planning before the emergency arises.  Planning involves the following: (1) having an up-to-date system design map that explains the major systems in use, their criticality to the organization, and their system requirements; (2) having a policy that identifies what the organization’s expectations are with system uptime, the technical solutions in place to help mitigate risks, and the roles that staff within the organization will play during an emergency; and (3) conducting a risk assessment that reviews the risks, mitigations in place, and unmitigated risks that could cause an outage or disaster.

Once you have a system inventory, policy, and risk assessment, you will need to identify user expectations for recovering from a system failure; those expectations are the starting point for analyzing how far your current capabilities fall short.  For example, if you back systems up to digital tape once a week, but interviews with users indicate that a particular system cannot absorb a loss of more than a few hours of data (because it cannot be re-created manually), your gap analysis would show that your current mitigation is not sufficient.
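As a rough illustration of that kind of gap analysis, here is a minimal sketch; the systems, backup intervals, and tolerances are hypothetical placeholders, not figures from any real assessment.

```python
# Compare the worst-case data loss implied by each system's backup interval
# with the loss users say they can tolerate. All entries are hypothetical.

systems = {
    # name: (hours between backups, hours of data loss users can tolerate)
    "billing database": (168, 4),   # weekly tape backup vs. "a few hours"
    "file shares": (24, 24),        # nightly backup matches expectations
}

for name, (backup_interval_hrs, tolerated_loss_hrs) in systems.items():
    shortfall = backup_interval_hrs - tolerated_loss_hrs
    status = "GAP: mitigation not sufficient" if shortfall > 0 else "ok"
    print(f"{name}: worst-case loss {backup_interval_hrs}h, "
          f"tolerated {tolerated_loss_hrs}h -> {status}")
```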

Now, gentle reader, not all user expectations are reasonable.  If you operate a database with many thousands of transactions worth substantial amounts in revenue every minute, but your DR budget is relatively small (or non-existent), users get what they pay for.  Systems, like all things, will fail from time to time, no matter the quality of the IT staff or the computer systems themselves.  There is truthfully no excuse for not planning for system failures to be able to respond appropriately – but then, I continue to meet people who are not prepared, so…

However, user expectations are helpful to know, because you can use them to gauge how much focus should be placed on recovering each system, and, where there are gaps in readiness, to make the case for expanding your budget or resources as much as is feasible.  Virtualization can help.

Technology

First, virtualization generally helps to reduce your server hardware budget, as you can run more virtual servers on less physical hardware – especially those Windows servers that don’t really do that much (in CPU and memory terms) most of the time.  This, in turn, can free up more resources to put toward a DR budget.

Second, virtualization (in combination with a replication technology, either on a storage area network, such as Lefthand, or through another software solution, for example, Doubletake) can help you to make efficient copies of your data to a remote system, which can be used to bring a DR virtual server up to operate as your production system until the emergency is resolved.

Third, virtual servers can be more easily backed up to disk using software solutions like vRanger Pro, which can in turn be backed up to tape or somewhere else entirely.

Virtualization does make recovery easier, but not pain-free.  There is still some work required to make this kind of solution work properly, including training, practice, and testing.  And you will likely need some expertise to help implement a solution (whether you work with a VMWare, Microsoft, or other vendor for virtualization).  On the other hand, not doing this means that you are left to “hope” you can recover when a system failure occurs.  Not much of a plan.

Testing and Practice

Once the technology is in place to help recover from a system failure, the most important thing you can do is to practice with this technology and the policy/procedure you have developed to make sure that (a) multiple IT staff can successfully perform a recovery, (b) that you have worked out the bugs in the plan and identified specific technical issues that can be worked on to improve the plan, and (c) that those who will participate in the recovery effort can work effectively under the added stress of performing a recovery with every user hollering “are you done yet?!?”.

Some of the testing should be purely technical: backing a system up and being able to bring it up on a different piece of equipment, and then verifying that the backup copy works like the production system.  And some of the testing is discussion-driven: table-top exercises (as discussed on my law web site in more detail here) help staff to discuss scenarios and possible issues.

All of the testing results help to refine your policy, and also give you a realistic view of how effectively you can recover a system from a major failure or disaster.  Some systems (like NT 4.0-based systems) will not be recoverable, no matter what you may do.  Upgrading to a recent version of Windows, or to some other platform altogether, is the best mitigation.  In other cases, virtualization won’t be feasible because of current budget constraints, limited technical expertise, or incompatibility (not every Windows system can be virtualized, because some have unique hardware requirements or otherwise won’t convert to a virtual machine).  But there are a fair number of cases where virtualizing will help improve recoverability.

Summary

Virtualization can help your organization recover from disasters when the technology is implemented within a plan that is well-designed and well-tested.  Feedback?  Post a comment.

iPhone 3.0 Software Available

For iPhone users, you can now download version 3 of the operating system for your phone, but you may be waiting a while for the iTunes store to let you get the update.  The update itself is about 258 MB, which will cause some waiting all by itself, but because there are so many other people downloading this update, you will be waiting for pretty much anything else that you need from the store (like updating apps on your iPhone, for example).

However, the update of my 3G phone went just fine this morning, so hopefully you iPhone users out there will have a similar experience.  Click here for Apple’s official info on what’s new in the latest update.

June 22 Update.

So there are a number of noticeable improvements with the iPhone OS version 3.0.  For one, there is an overall search function for the iPhone, which now lives on the first page of applications of the phone.  This search goes across a variety of entities in the phone, such as your contacts, music, email, installed apps, and calendar, so you can find stuff faster.

Your call log now shows you which contact you called, and if the number is not in your contacts, the phone will tell you the city and state associated with it.  This saves you a tap to find this info.

There is now also a search function built into your email.  This is particularly helpful given that you probably have far more messages stored on the mail server you access via IMAP than on the phone itself.  For example, my Gmail account today has about 3,000 messages stored in my mailbox, so being able to search through them to find a message is a big help.  You can also turn your phone sideways to look at your messages in landscape mode, which is helpful when you are looking at attachments or just reading your messages.

I also noted that the stock quote app that comes with the phone now has more information on each stock that you might be following, including high/low, P/E, market cap., and news.  And you can look at the stock price graphs for each stock in landscape mode by just turning your phone ninety degrees.

There are a number of other nice add-ons to the phone that you can investigate further on Apple’s site.  Happy i-Phone-ing!

Updates on Online Marketing Efforts

A few months ago, I wrote some articles here about efforts to market my law practice online through Yahoo and Google search advertising.  Here are some preliminary results on the efforts and plans to improve my marketing efforts.

More Content = More Visitors

One of the essential rules of web site design is that the more content on your site, the more chances you have of being indexed against the keywords that internet users are actually searching for.  More indexed content means more visitors, which in turn increases the chances of picking up a new client through the site or impressing a colleague with your knowledge so that they refer you a prospective customer or two.

Of course, adding content is a time-intensive kind of thing, even if you write blog articles day and night.  One of the good things about Google and Yahoo is that the search engines will come back and visit your site for new content on a regular basis (Google checks my site more than once per week, and you can pay Yahoo for priority indexing so they visit your site at least weekly for content changes and indexing).

Pushing Out Newsletters

Pushing out newsletters to subscribers, with links that bring readers back to your site for the newsletter’s full content, also helps to drive users to your site (and adds content directly to your site for the hungry search engines).  Over time, I have definitely seen spikes in activity on my site around the times that I send out newsletters via email to subscribers.

Watch What Keywords Bring Users to Your Site

Google Analytics keeps track (for the web site pages in which you embed the JavaScript needed to collect visitor data) of the keywords that bring users to your web site.  These keyword statistics help to determine what brought a user to the site, and may lead you to change your site content to either encourage or discourage the kinds of visitors that are reaching your site.  For example, I had written a newsletter about the new Massachusetts rules aimed at protecting consumer data collected by businesses in that state.  (See 201 CMR § 17.00.)  A number of users have found my site because of this newsletter, particularly by searching for the citation itself.

On the other hand, some users looking for a bankruptcy or divorce attorney have also landed on my web site, which suggests that either my site has been indexed under overly general keywords related to law, or the advertisements I have running on Yahoo and Google (if they are the source of some of these visitors) are not specific enough in their keyword targeting.  For example, my web site does show up on the third page of search results when searching Google for “disaster recovery table top exercise,” but I would be happier if my site were closer to the first page of results.

Related to this are the web sites that refer visitors to your site.  Interestingly, Facebook is my top referring site, followed by LinkedIn (where I have a professional profile), and then some web sites that I don’t recognize but that apparently have indexed my site into their search results for one reason or another.  The same question arises of whether to encourage or discourage such links based on the content of your own site.

Twitter and Tweeting

So far, I haven’t done much tweeting out in the world of twitter.  I guess as an attorney, 140 characters is just too restrictive.  Perhaps if I start writing haikus, twitter would be the place to publish them!  I see that Iran’s election results are being tracked by twitterers in Iran, so maybe if I was at a live event like Apple’s annual trade show or another large meeting, I’d be more prone to twitter away (which I can do from my iPhone if I were so inclined).  WordPress does support integration with twitter via a plugin, so if you want to be able to put your tweets into a digest form and load them automatically to your blog, you can.  Time will tell if twitter ends up being useful to market a law firm.

Most Importantly…

Keep working at it and don’t be afraid to try new things.  Google Analytics (or a similar web tracking software package that you can use on your web site logs) will help you to figure out why people come to your site and what they spend time looking at on it.  And be patient – online marketing is a fair amount like fishing.  Some days you come home with nothing, and other days, you find a place pre-stocked with your favorite fish and you come home fat and happy!

Virtualization Primer

The dictionary definition of “virtual” is being in essence but not in fact.  That’s kind of an interesting state when you are talking about computers.  The reason is that for most folks, a web server or some other mysterious gadget that makes their email work is probably “virtual” to them already because most users don’t see the server itself in action, only its result (a web page served up or an email delivered to their inbox).  Virtualization is one of those “deep IT” concepts that the average person probably doesn’t pay much attention to.  But here is conceptually what’s going on.

Back in the bad old days of computing, if you wanted a Microsoft Windows Server installed and available for use, you would go out and buy yourself a physical piece of computing equipment, complete with hard drives, RAM, video card, motherboard, and the rest, and install the Windows operating system on it.  If you needed another server (perhaps for another application like email or file sharing), you would go out and buy a new piece of hardware and another license for Windows Server, and away you would go.  This kind of 1:1 ratio of hardware to server installations was fine until you had more than a few servers installed on your network and the AC couldn’t keep up with the heat output of your computer equipment.

So a bunch of very smart people sat down and asked how this could be handled better.  I’m sure someone in the room said just buy much more expensive hardware and run more applications on the same physical server.  This is in fact the model of larger businesses back in the worse old days of mainframe computing (oh yeah, people still use mainframes today, they just keep them in the closet and don’t advertise this to their cool friends who have iPhones and play World of Warcraft online).

But the virtualization engineers weren’t satisfied with this solution.  First off, what happens when the hardware that everything is running on fails?  All of your eggs, being in one basket, are now toast until you fix the problem, or bring up a copy of everything on another piece of equipment.  Second, what happens when one of those pesky applications decides to have a memory leak and squeezes everybody else off the system?  Same as number one above, though the fix is probably quicker because you can just bounce that ancient mainframe system (if you can find the monk in the medieval monastery who actually knows where the power button is on the thing, that is).  Third, mainframes are really pretty expensive, so not just any business is going to go and buy one, which means that a fair amount of the market for server equipment has been bypassed by the mainframe concept.  And finally, mainframes aren’t cool anymore.  No one wants to buy something new that isn’t also cool.  Oh wait, I doubt the engineers that were sitting in the room having a brainstorming session would have invited the marketing department in for input this early on.  But it is true – mainframes aren’t cool.

So, this room of very smart people came up with virtualization.  Basically, a single piece of computing hardware (a “host” in the lingo) can be used to house multiple, virtual instances (“virtual machines”) of complete Windows Server installations (and other operating systems, though Windows virtualization is probably driving the market today).  On top of that, they came up with a way for these virtual machines to move between physical servers without rebooting the virtual machines or even causing much of an impact on performance to the users.  Housing multiple complete virtual machines on a single host works because most Windows machines sit around waiting for something to happen pretty much all day – I mean, even with Microsoft Windows, how much does a file server really have to think about the files that it makes available on shares?  How much does a domain controller have to think in order to check if someone’s username and password are valid on the domain?  Even in relatively large systems environments, there are a considerable number of physical servers that just aren’t doing all that much most of the time.

Virtualization provides a way to share physical CPU and memory across multiple virtual machines, so you can get more utility out of each dollar you have to spend on physical server equipment.  Some organizations are therefore able to buy fewer physical servers each year.  Sorry Dell and HP – didn’t mean to rain on your bottom line, but most IT departments are trying to stretch their capital budgets further because of the recession.  Fewer servers also means less HVAC and power, both of which have increased in cost as energy markets have been deregulated and prices have started to more closely follow demand.  I guess BG&E and Pepco are also sad, but look, some of your residential customers still set the AC at 65 degrees, so just charge them three times as much and everyone is happier!
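If you want to put rough numbers on the consolidation argument, here is a quick back-of-the-envelope sketch.  Every figure in it (server count, consolidation ratio, wattage, electricity rate) is a hypothetical placeholder to be replaced with your own inventory and utility rates.

```python
# Back-of-the-envelope consolidation math. Every number below is a
# hypothetical placeholder; substitute your own inventory, wattage, and rates.

physical_servers = 20        # lightly loaded servers you run today (assumed)
vms_per_host = 8             # consolidation ratio you expect (assumed)
watts_per_server = 400       # average draw per physical box (assumed)
dollars_per_kwh = 0.12       # your utility rate (assumed)

hosts_needed = -(-physical_servers // vms_per_host)   # ceiling division
servers_retired = physical_servers - hosts_needed
kwh_saved = servers_retired * watts_per_server * 24 * 365 / 1000

print(f"hosts needed: {hosts_needed}, servers retired: {servers_retired}")
print(f"power saved: ~{kwh_saved:,.0f} kWh/yr "
      f"(~${kwh_saved * dollars_per_kwh:,.0f}/yr), before any HVAC savings")
```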

Most of the leading vendors also offer “High Availability,” which means that if a host fails, the supervising software can automatically restart its virtual machines on another available host in your cluster.  For those IT people carrying blackberries who have to go to the server room at 3am to reboot physical equipment, welcome to the 21st century.

In addition, at least VMWare offers a way for virtual machines to automatically move between hosts when a particular host gets too many requests for CPU or RAM from the virtual machines running on it.  This functionality helps to improve overall performance which makes all the users happy, and quiets the help desk (a little bit).  Ok, so the users call you about something else, so the help desk is still not any less quiet, but at least you can cross one complaint off the list for the moment.

In sum, virtualization is a smart and efficient way to implement servers today.  I imagine if you work in IT that you are very likely to come into contact with virtualization soon if you have not already.  We converted about two years ago and we aren’t looking back!

Obama & Health Care IT

President Obama’s plan (published here) (the “Plan”) describes a multi-part approach to expanding health insurance coverage for those without insurance while attempting to reduce the costs of providing health care to Americans.  A portion of the Plan involves the expansion of health information technology to help reduce the costs of administering health care.  On page 9 of the Plan, paper medical records are identified as a health care expense that can be reduced through records computerization.  The Plan cites a study by the RAND group (published here) (“RAND”) indicating that processing paper claims costs twice as much as processing electronic claims.

Estimated Savings and Quality Improvements by Adoption of Health IT

The RAND group suggests that fully implemented health IT would save the nation approximately $42 billion annually, and would cost the nation’s health care system approximately $7.6 billion to implement.  RAND at 3.  According to RAND’s review of the literature on health IT adoption, approximately 20% of providers in 2005 had adopted an information system (a term covering anything from patient reminder systems to clinical decision support).  RAND at 20-21.  Full implementation of health IT would require a substantial number of providers to convert to regular use in order for the total savings identified by RAND to be realized.  RAND estimated that in 2005 there were approximately 442,000 providers in the U.S.; this suggests that about 353,000 providers (the remaining 80%) would need to convert from paper to electronic systems before the full savings to the health system would be realized.  RAND at 20.

Areas of savings noted in the outpatient setting include transcription, chart pulls, laboratory tests, drug utilization, and radiology.  RAND at 21.  Areas of savings noted in the inpatient setting include reduction of unproductive nursing time, laboratory testing, drug utilization, chart pulls and paper chart maintenance, and reduction of hospital length of stay.  RAND at 36.  Savings on the inpatient side account for approximately two-thirds of the total, and the largest area of annual savings is tied to the reduction in patients’ length of stay as a result of adopting health IT.  Id. This overall cost savings assumes adoption of health records by virtually all health care providers over a 15-year period; the total savings to the health system during that time would be about $627 billion.  Id.
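A quick arithmetic check on the figures cited above, just to make the magnitudes concrete (the inputs are the RAND numbers quoted in this post, nothing more):

```python
# Quick arithmetic check on the RAND figures cited above.

providers_2005 = 442_000    # estimated U.S. providers (RAND at 20)
already_adopted = 0.20      # share with some health IT in 2005 (RAND at 20-21)
annual_savings = 42e9       # projected annual savings, fully implemented
years = 15                  # adoption horizon used in the report

providers_to_convert = providers_2005 * (1 - already_adopted)
print(f"providers still to convert: ~{providers_to_convert:,.0f}")  # ~353,600

# Annual savings times the 15-year horizon lands in the same neighborhood as
# the ~$627 billion cumulative figure cited above.
print(f"rough 15-year savings: ~${annual_savings * years / 1e9:,.0f} billion")
```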

The Plan also discusses increasing the quality of health care delivered to all patients through the implementation of disease management programs (which are driven by health data of individual patients to monitor progress and outcomes), and the “realignment” of provider reimbursement with quality outcomes.  Plan at 7.  Realignment typically occurs when health insurance plans pay not for the total visits billed by a provider, but based on some kind of quality measure that tracks how well patients are doing in managing their health condition.  This is also driven by the availability of reliable health outcomes data (for example, the hemoglobin A1c test results of patients with diabetes over time, and the percentage that report a result under the “normal” or expected value).

The Trouble with Adoption of Health IT

Adopting health IT systems, however, is no small feat.  Systems have been available to the health care industry for a substantial period of time (Centricity, a health information system now owned by General Electric, was originally developed by MedicaLogic in the mid-80’s and became popular in the 1990s).  See Article.  In 2000, MedicaLogic had penetrated the practices of about 12,000 physicians in the U.S., or around 3% of the total market, and was described then as the market leader in electronic medical records (and perhaps a total of 10% of the market had adopted some system by that time).  By RAND’s analysis, five years later roughly 20% of physicians had adopted some form of health IT.

If market penetration were to double every five years, then by 2010, 40% of physicians should be using a health IT system, and by 2015, 80% should have adopted such a system.  (Admittedly, this is a weak assertion because there is not sufficient data in this article to support it.  In addition, technology adoption rates tend to follow an S-curve rather than a straight line, with larger numbers of adopters joining the crowd as time progresses.  But, dear reader, please feel free to comment with specifics to help improve the quality of this article!)
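To make that back-of-the-envelope projection explicit, here is a tiny sketch; it simply encodes the doubling assumption from the paragraph above and carries all the same caveats.

```python
# The doubling projection from the paragraph above, made explicit. It simply
# doubles the 2005 base every five years and is only as good as that guess.

adoption = 0.20  # roughly the 2005 adoption rate per RAND
for year in (2005, 2010, 2015, 2020):
    print(f"{year}: ~{min(adoption, 1.0):.0%} of physicians")
    adoption *= 2
# 2005: ~20%   2010: ~40%   2015: ~80%   2020: capped at 100%
```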

The New England Journal of Medicine, using a likely more restrictive definition of health IT, found that less than 13% of physicians had adopted such a system as of 2008, based on a sample of 2,758 physicians.  Article here.  An article in the Journal of Evaluation in Clinical Practice (“JECP”) reported that about 18% of the practices it surveyed (847 in total) had an electronic health record in use in 2008.  Article here.  As RAND pointed out in its own literature search conducted in 2005, the definitions of health IT vary widely across the empirical surveys, so an accurate estimate of market penetration is hard to come by.  However, it does appear that the share of practices that have adopted health IT is not significantly higher than in 2005.

An interesting article suggested that some of the problem with health IT adoption may be regional – that some regions of the country tend to have a slower adoption rate of technology in general, which would tend to slow down the adoption of health IT in those areas.  Article here.  The JECP survey also indicated that specialty practices and smaller practices tend to be slower to adopt health IT as compared to their primary care provider counterparts.  Access to adequate capital to fund health IT purchases is an obvious reason for not implementing such systems.  Id. I would also posit that the adoption of health IT does not generally distinguish health care providers in the market of health care delivery (physicians don’t advertise that they have a health record system).  It would be interesting if patients could receive information on average health outcomes by physician when researching who they want to use for medical services (only possible if health IT is widely adopted and there is general consent to the publication of such data, which today is putting the cart before the horse).

There is, therefore, a market failure in that, if we accept that health IT reduces medical costs or improves outcomes over time, the market has not made a concerted effort to adopt this technology.  The Plan puts forward capital to help implement records and has an incentives component that rewards improved health outcomes.  Time will tell if these investments and market changes will actually reduce health care costs in the U.S.

Maryland Health IT

The governor is poised to sign a Maryland bill on health information systems into law this week.  Click here to see the Baltimore Sun article.  The bill, known as HB 706 (Electronic Health Records – Regulation and Reimbursement), was adopted by the House and State Senate by April 10, 2009.  The Act empowers the Maryland Health Care Commission to establish a health information exchange in the State, and to develop regulations that incentivize providers who use an electronic health record.  The incentives are required to have monetary value, and may include such things as increased reimbursement for specific services, lump sum payments, gain-sharing arrangements, and rewards for quality.  Click here for the full text of the enrolled House Bill. The bill also anticipates that, by 2015, health care providers will be using an electronic health record that is nationally certified and capable of sharing information with the State’s health information exchange.

I think this is an interesting development for health care providers.  Most estimates that I have seen show that a majority of health care providers do not use computerized records to keep track of patient care.  The records systems are relatively expensive (at least in the short term, for software licensing, equipment, and the like), most providers are not techno-geeks, there are serious technical security issues that have to be managed (such as the requirements promulgated under the HIPAA security regulations), and in the short term providers are not likely to become more productive or see a substantial increase in revenue from the investment.  This bill, once signed into law, will start to pile on incentives for providers to move to an EHR, particularly if a provider can begin to demonstrate improvements in the quality of care delivered to patients.

The bill also presumes that there is value in a standard health information exchange, which I imagine would act as a clearinghouse for authorized users to access or analyze health information on Maryland residents.  Beyond the obvious privacy concerns (employers firing or not hiring employees with expensive health conditions), having such sensitive information in one place will require a substantial investment in protecting that data from unauthorized access or theft.  I think it is interesting that the State is not just letting the free market address the need for these kinds of information exchanges – Google and Microsoft have both put products in the marketplace for individual consumers to manage health information.  Arguably, such a large scale effort to place all Maryland patient information into a single repository may exceed the resources of private investors or companies expecting to profit from the repository’s development.  The open question is what value such a repository will actually have to individual patients whose data is “on deposit.”  Stay tuned!