An Unscientific Survey on Legal Tech – Results

In preparation for speaking at the Solo and Small Firm Conference on November 12, I solicited feedback from fellow MSBA solo attorneys via an online survey on technology.  The survey was designed to get at some basic questions about what solos use in their practices here in Maryland.  I received 32 responses from fellow attorneys, and I share some of the results here in comparison to a larger survey conducted last year by the American Bar Association.

I asked respondents a series of multiple-choice questions, including “what kind of technology do you use in your practice?” and “what online marketing resources do you use in your practice?”  I also asked respondents to categorize their practice and to say how long they had been in practice.  The ABA’s survey asked respondents nationally whether they used a smartphone (such as a BlackBerry or iPhone) in their practice, whether they used social media for their practice, whether they used a Windows PC or a Mac to practice, and what kind of web-based research and practice management tools they used.

There are some interesting differences between the ABA survey and my informal survey.  First, only 12% of ABA respondents indicated that they used some form of social media in their law practice.  Maryland respondents, however, indicated a considerably higher utilization rate (42% used an online law directory listing service like avvo.com, 39% used LinkedIn, and 23% used Facebook).  Second, only 4% of ABA respondents indicated that they used a Mac to practice law, whereas 6% of Maryland respondents used a Mac and another 13% used both a Mac and a PC, suggesting that Macs have enjoyed greater market penetration with attorneys who may have a legacy PC practice management system that they now operate in a virtual environment on their Mac.  Third, only 28% of ABA respondents indicated that they regularly used some kind of practice management system, whereas 70% of Maryland respondents indicated that they had and used one in their practice.  (Also notable: when asked what kind of practice management system they used, responding attorneys named a considerable diversity of vendors.)

Solos, at least here in Maryland, appear to be above average in their use of technology in their practices.  Perhaps this is by necessity, in order to reduce overhead costs.  63% of survey respondents indicated that they were proficient with technology, but fewer than 19% felt they were experts.  Comments?

Some Technology Resources for Attorneys

I am scheduled to speak for the MSBA tomorrow at 10:45 at its annual Solo and Small Firm Conference, and will be talking about legal tech for attorneys.  As part of that presentation, I have prepared a list of additional resources to help attorneys plan for their technology needs, particularly those considering starting out as solo practitioners.  Here are some of the resources from that list:

  • Nelson, et al., “The 2010 Solo and Small Firm Legal Technology Guide” (note that the 2009 version of this book is available on Google Books for free)
  • Siskind, et al., “The Lawyer’s Guide to Marketing on the Internet”
  • Susskind, “The End of Lawyers?  Rethinking the Nature of Legal Services”
  • Elefant, et al., “Social Media for Lawyers: The Next Frontier”

Here are some common web pages that I use in my practice:

And here are some additional web applications that may be helpful for attorneys:

The Solo and Small Firm Conference is a great opportunity to learn more about Social Media, Technology for Practice, Legal Ethics, and a host of other timely and useful legal topics.  I hope you will join us tomorrow and Saturday!

iPad and Us Attorneys

The iPad came out in April this year, but I hesitated to get one because it is a bit pricey, and I wasn’t sure I would have a use for it, given that I already have an iPhone and a MacBook Air. I decided to give it a try this weekend. I like it. For reading things such as maps, books, web pages, emails, and other content, the iPad works very well. In combination with iDisk (part of a MobileMe subscription), you can also share work from the office to read at home. Not that I want to take work home with me, but this is a simple way to read something when it is quieter at night, and then come in the next day to write that brief or memo.

I decided to try typing this article on the iPad with the WordPress app, and it is OK. You can type with two hands, unlike on the iPhone, so it is quicker. But putting in links and other references is not intuitive (as others have commented here: http://blogs.techrepublic.com.com/hiner/?p=5941&tag=nl.e101.b). However, this device might be easier to use in court for reference materials and the like, particularly if the court has WiFi (I opted not to get a 3G model iPad). The device is also notably faster than the iPhone. The battery life of the iPad is also quite good compared to my laptop and phone.

Overall, I think this is a nice device to own and it will improve as innovators come up with new apps designed for it. However, I don’t think this device is ready to replace your laptop yet! Check back for more from the field.

Autodesk and the First Sale Doctrine

Autodesk, Inc. and Timothy Vernor have gotten into a dispute over Mr. Vernor’s resale of Autodesk’s AutoCAD software on eBay.  Autodesk kept filing DMCA takedown notices for each auction of AutoCAD software that Mr. Vernor started on eBay.  After this happened a few times, Mr. Vernor hired a lawyer and sued Autodesk under the Declaratory Judgment Act, seeking a declaration from a federal court within the 9th Circuit that he had the right to resell Autodesk’s software.

Mr. Vernor won at the trial level.  A copy of the opinion is found at Vernor v. Autodesk, Inc., 555 F. Supp. 2d 1164 (W.D. Wash. 2008).  At the heart of Mr. Vernor’s argument are the protections afforded by section 109 of the Copyright Act, known as the “first sale doctrine.”  That section states: “Notwithstanding the provisions of section 106(3), the owner of a particular copy or phonorecord lawfully made under this title, or any person authorized by such owner, is entitled, without the authority of the copyright owner, to sell or otherwise dispose of the possession of that copy or phonorecord.”  17 U.S.C. § 109(a).  Mr. Vernor argued that his purchase of copies of the AutoCAD software at yard sales could only have occurred if Autodesk had already sold those copies to another party prior to his purchases.  Therefore, the first sale doctrine would immunize Mr. Vernor from further liability under the Copyright Act.

Autodesk, on the other hand, argued that it had effectively never sold a copy of its software to anyone: any sale of its software is subject to a licensing agreement that specifically forbids transfer of the software, and the copies in Mr. Vernor’s possession were not sold but were transferred to the prior holder via a settlement agreement between that entity and Autodesk.  Furthermore, because the software is only offered via a restrictive license, the subsequent holder of a copy of the software is merely a licensee.  As a result, Autodesk argued, section 109 provides a person such as Mr. Vernor no defense.

After the trial court entered judgment for Mr. Vernor, Autodesk appealed.  The Ninth Circuit reversed the trial court.  Its opinion is found at Vernor v. Autodesk, Inc., No. 09-35969 (9th Cir. Sept. 10, 2010).  The Ninth Circuit established a three-part test for determining whether the subsequent holder of a copy of software owns the software or is merely a licensee: “We hold today that a software user is a licensee rather than an owner of a copy where the copyright owner (1) specifies that the user is granted a license; (2) significantly restricts the user’s ability to transfer the software; and (3) imposes notable use restrictions.”

For fun, I downloaded a copy of the End User License Agreement under which Microsoft licenses its Office suite, which you can read here: clientallup_eula_english.  I know that you will be surprised to discover that Microsoft licenses, but does not sell, its software to end users.  Section 7 of this agreement provides a whole host of restrictions on use and resale of the software.  So I checked eBay to see if anyone would sell me a copy of Microsoft Office, and this morning I found 9,623 offers.  Searching for AutoCAD turned up over 2,400 copies for sale.  Apparently many people who possess copies of software don’t pay much attention to the license agreement that makes them licensees rather than owners, and that now makes them copyright infringers when they offer these software packages for sale on sites like eBay.

The licensing terms of the Microsoft EULA do suggest that “use of the software” constitutes acceptance of the agreement.  Mr. Vernor indicated that he never used the copies of AutoCAD and therefore wasn’t bound by the agreement with Autodesk, but this was not dispositive for the Ninth Circuit, because he bought the software from a prior holder that could not be called an “owner” based on the agreement between that entity, CTA, and Autodesk.  I’d expect this ruling from the Ninth Circuit to cause some trouble for licensees, many of whom probably never thought, when they bought that shrink-wrapped CD, that they could not resell it later, given how common limited licensing agreements are in the world of proprietary software today.  Open Source, here we come!

Index Tuning to Improve Database Performance

Many modern systems rely on databases, and databases rely on performance tuning to ensure that these systems work properly.  This article investigates some basic “rules of thumb” for tuning databases stored on Microsoft SQL Server.  There are two basic tools that ship with SQL Server that can be used to tune performance: the SQL Server Profiler and the SQL Query Analyzer.  The Profiler allows you to monitor a SQL Server for the queries that the server handles and to collect performance metrics on particular queries.  Longer-running queries will require more CPU, read, and write time, and will generally have a longer than average duration.  The Profiler will also provide you with the specific syntax executed against the server.
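If you are running SQL Server 2005 or later, the dynamic management views offer a complementary way to find expensive queries without setting up a Profiler trace.  A minimal sketch (the offset arithmetic for pulling out the statement text is the usual boilerplate):

    -- List the ten most expensive cached queries by cumulative CPU time.
    SELECT TOP 10
        qs.total_worker_time,                 -- cumulative CPU time (microseconds)
        qs.total_elapsed_time,                -- cumulative duration (microseconds)
        qs.execution_count,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;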

The SQL Query Analyzer allows you to examine individual queries.  The Analyzer provides you with a way to examine the execution plan that the server generates for the query you are examining.  The execution plan is the sequence of steps that SQL Server will take in order to return results for a particular query.  The server must take data from individual tables or indexes, combine these objects based on matching fields, and then collate the final query results for display to the requester.

The execution plan is read from right to left.  At the far right are the starting indexes or tables that SQL Server will use in order to return results.  Some general rules of thumb:

1. The object or step that consumes a large percentage of the total processing time of the query should be examined first.

2. If a starting object is a table, analyze whether an index can be declared that will “cover the query”[1] and will also be a feasible[2] index to create (a sketch of such an index follows the discussion below).

With regards to feasibility, there are two basic considerations.  First, are the fields that you plan to add into the index ones that will actually help improve performance if the index is used as compared to the underlying table?  Second, if the table is particularly large, are there already enough indexes being stored that the space for the new index is not worth the performance improvement likely to be gained from its creation?

As to the first issue, generally, text and nvarchar fields are poor choices to be included in an index (or any large byte field in a table).  The reason for this is that the index will store a copy of these values separate from the table, hopefully in an index that is smaller, in total bytes and pages of data, than the underlying table itself.  However, if one field in the table takes up a large portion of the total bytes per row, and you declare an index with that same field, the overall performance of scanning the resulting index and the underlying table will be similar, and there will be minimal performance improvements to the query you are executing.

As to the second issue, the storage space of a database is an important consideration.  The more indexes there are and the larger the table being indexed, the more space a new index will take up.  This may cause the performance benefit of the new index to be outweighed by the additional storage space, backup time, and other costs associated with it.  For example, if a particularly large table requires a new index for a query that is run once per month, the incremental cost of storing the new index may not be worth the benefit to a query that is only marginally improved and infrequently executed.
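To make this concrete, here is a hypothetical covering index for the document-table query described in footnote [1] below (the table and column names are illustrative only, and the key columns are small integers, which keeps the index feasible):

    -- The query selects pid, did, and sdid from the document table, so an
    -- index containing those three columns can answer it without touching
    -- the table at all.
    CREATE NONCLUSTERED INDEX IX_document_pid_did_sdid
        ON dbo.document (pid, did, sdid);

    -- On SQL Server 2005 and later, columns that are only selected (not
    -- searched or joined on) can instead be moved to an INCLUDE clause:
    -- CREATE NONCLUSTERED INDEX IX_document_pid_cover
    --     ON dbo.document (pid) INCLUDE (did, sdid);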

3. Examine the fields that are being used to join the tables together in the query.  Generally, the primary keys that join tables together are a part of a clustered index on the table; however, there may not be an index that includes a foreign key in a table.  As a result, the query performance may be degraded.
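As a short, hypothetical illustration of point 3 (again using made-up table names), a narrow nonclustered index on the foreign key column is usually all that is needed:

    -- docfield.did is a foreign key back to document but has no index of its
    -- own, so joins on it must scan the table.  A narrow index on the
    -- foreign key lets SQL Server seek instead.
    CREATE NONCLUSTERED INDEX IX_docfield_did
        ON dbo.docfield (did);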

4. Examine the fields being selected in the query.  Are there fields that can be taken out of the select statement?  Are there any indexes that can cover the fields being selected?

5. When tables are being put together to get the results of the query, is the join process in the estimated execution plan a “merge join,” a “hash match,” or a “nested loop”?  Merge joins run the quickest when the tables/indexes being joined are sorted and organized on the same field that is used to complete the join.  Hash match is the most common joining algorithm; it is a two-step process in which a hash table is built from the smaller input, and rows from the larger input are then probed against that hash table to find matching records.  Nested loops are usually the least efficient joins, because for each row of the outer table, the inner table is searched for matching rows.  If SQL Server is using a nested loop to join two large tables, your query will be in trouble.  Sometimes you can create an index on the joining tables that will help SQL Server do a better job executing the query, especially if one of these joins is a bottleneck to performance.
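One way to see which join operator the optimizer actually chose is to return the plan operators alongside the results (the table names below are the same hypothetical ones used above, and fieldvalue is a made-up column):

    -- SET STATISTICS PROFILE returns a row-by-row breakdown of the executed
    -- plan, including the join operators, after the query results.
    SET STATISTICS PROFILE ON;

    SELECT d.pid, d.did, f.fieldvalue
    FROM dbo.document AS d
    INNER JOIN dbo.docfield AS f
        ON f.did = d.did;      -- the index on docfield(did) supports a merge or hash join here

    SET STATISTICS PROFILE OFF;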

6. If an index is being used but is taking up substantial processing time, you should evaluate whether the index has been properly maintained.  Indexes that are fragmented will have poor performance over time and require regular maintenance in order to perform optimally.
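On SQL Server 2005 and later, fragmentation can be checked and corrected along these lines (older versions use DBCC SHOWCONTIG and DBCC INDEXDEFRAG; the index and table names here are the hypothetical ones from above):

    -- Report fragmentation for every index on the document table.
    SELECT i.name, ps.avg_fragmentation_in_percent, ps.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.document'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;

    -- A common rule of thumb: reorganize at moderate fragmentation, rebuild
    -- when fragmentation is heavy (roughly 30 percent or more).
    ALTER INDEX IX_document_pid_did_sdid ON dbo.document REORGANIZE;
    -- ALTER INDEX IX_document_pid_did_sdid ON dbo.document REBUILD;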

7. Can a view be declared to cover the query that is being used?  An ordinary view is expanded into the underlying query when the plan is compiled, so by itself it rarely changes performance; an indexed view, however, materializes its results and can sometimes marginally improve the performance of a query that relies on the view rather than running against the underlying tables directly.
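A minimal sketch of an indexed view over the same hypothetical tables (SCHEMABINDING is required, and the unique clustered index is what actually materializes the view; on non-Enterprise editions the NOEXPAND hint may be needed for the optimizer to use it):

    CREATE VIEW dbo.vw_document_fields
    WITH SCHEMABINDING
    AS
        SELECT d.pid, d.did, f.fieldid, f.fieldvalue
        FROM dbo.document AS d
        INNER JOIN dbo.docfield AS f
            ON f.did = d.did;
    GO

    -- Materialize the view; fieldid (hypothetical) is assumed unique per row.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_document_fields
        ON dbo.vw_document_fields (fieldid);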

When conducting index tuning, it is important to have a larger picture of the indexes that already exist in the database, as well as a working understanding of the common joins between the tables in the database.


[1] Covering the query means creating an index that will have the fields in it that are being selected as a part of the tested query.  For example, if the query itself asks for pid, did, and sdid from the document table, an index that has pid, did, and sdid will cover the query.

[2] Feasible indexes are usually based on integers or other small-byte-size fields, do not substantially overlap with an existing index, and will not consume a large amount of space within the database itself.  New indexes can each increase the total amount of space required by the database by 10-20%; this may limit the total number of indexes that can, cost effectively, be created for a particular table or database.  In addition, indexes that contain large byte fields, such as varchar or long nvarchar fields, will be unlikely to perform better than the underlying table.

Disaster Recovery Planning

I had the pleasure recently to present to a group of IT and business leaders on the topic of disaster recovery.  Based on some of the questions and feedback from the group, I thought I would add some comments on this topic on the blog.

First, a fair number of attendees commented that they were having a hard time explaining the need for disaster recovery, or obtaining the necessary resources (staff time, money, or both) to implement a solution.  Of the attendees, only a handful reported that they had completed the implementation of a disaster recovery solution.  I think these are common problems for many organizations that are otherwise properly focused on meeting the computing needs of their user community.  Disasters generally happen infrequently enough that they do not remain a focus of senior management.  Instead, most businesses focus on servicing their customer base, generating revenue, and addressing the day-to-day issues that get in the way of these things.

Second, one of the attendees properly emphasized that IT staff are an important part of the planning equation.  Without qualified and available staff, a disaster recovery system will not produce the desired outcome – a timely and successful recovery – no matter how much the system itself costs.

Third, at least one attendee indicated that they had implemented a solution with a service provider, but the solution was incomplete for the organization’s recovery needs.  This is a common problem for organizations whose systems change significantly over time but that do not include disaster recovery in the new system acquisition process.

Disaster recovery as a concept should not be introduced as an IT project, in spite of the fact that there are important IT components to any disaster recovery plan.  Instead, disaster recovery is a mindset.  It should appear on the checklist of items to consider for organizational decisions, along with other considerations like “how will this project generate revenue?” and “how will this project impact our commitment to protecting customer data?”

Disaster recovery solutions are more than just another virtual server or service.  Disaster recovery is another insurance policy against the uncertainty of life.  Organizations routinely purchase liability insurance, errors and omissions insurance, and other insurance policies on the basis that unanticipated negative events will inevitably occur.  System failures, computer viruses, and other environmental failures are inevitable, even if rare.  Disaster recovery solutions are a hedge against these unfortunate events.

Risk assessments for information systems help organizations to quantify their exposure to the unknown, and to estimate the potential impact to the organization if a threat is realized.  Risk assessments also provide an orderly way to prioritize system recoveries, so that a disaster recovery solution focuses on mitigating the largest risks to the most critical information systems.  As was pointed out at the presentation, payroll systems often seem the most critical systems, but the mitigations for the unexpected failure of a payroll system may not be a computer solution at all.  Instead, the organization may elect to simply pay employees cash based on their last pay check, and reconcile payments once the payroll system is available again.

Cloudy with a Chance of Computing

The marketing team that originally went with “cloud computing” to describe various information services that are hosted outside of an organization’s walls may not have read The Clouds by Aristophanes.  In fact, they may have been inspired by the many Visio drawings of network engineers that had a little cloud to represent some wide area network.  Or maybe the originators of this appellation all live in a cloudy city where it rains all the time (like Seattle).  The truth is that it is difficult to know where these sorts of things get started.  But I was reading the lamentations of another writer about the return on investment of cloud computing (or should I say, lack of ROI), and his woes got me to thinking about clouds and whose computing they might actually benefit.

As a species, technology people are a suspicious bunch.  And many have control issues.  This may be the single largest cultural reason why organizations have such a hard time letting go of their core infrastructure or applications to an external vendor.

In spite of this, I don’t think any IT staff person would propose building their own search engine for the internet; everybody uses a “cloud” search engine like Google or Yahoo or Bing to find stuff on the internet.  Even though the search engines keep track of what you are searching for and use that information to fine-tune their indexes (and potentially respond to subpoenas from people looking to sue or arrest you), I don’t think all that many IT people would go to senior management and say “I want $x billion in my budget to create a secure search engine for the internet for our organization.”  That’s because my hypothetical is silly.  Google is basically free (because it is paid for by advertisers), indexes an enormous amount of the internet, and is both relatively reliable (only one or two outages here and there) and relatively accurate in the results it returns.  Free is usually hard to beat, and when you throw in reliable with free, yep, time for a new project idea.

Now, a fair number of IT professionals are not thinking about internet search when they consider a migration to cloud computing in their organizations.  I’d guess that core applications are on the list, like email, telephone services, document management, or other mission critical systems.  For organizations that have gone through the pain of Microsoft Exchange 5.5 and the subsequent migrations to 2000, 2003, 2007, and maybe 2010, that have experienced IT staff who can take apart and put back together an Exchange implementation, and that experience little downtime today, cloud computing probably doesn’t make things better or less expensive.

Instead, cloud computing (like its older brother, the poisonous snake, the Application Service Provider (ASP)) is aimed at a different market segment.  For smaller organizations, who can’t afford a full time IT person and certainly aren’t going to pay for an Exchange specialist to be on staff, a cloud vendor is a reasonable alternative.  At $8 a month per mailbox, an organization of 10 users will pay roughly $1,000 per year ($8 x 10 mailboxes x 12 months = $960) to have their Exchange server hosted by a service provider like mailstreet.net – far less than the cost of an IT person and all of the licensing and equipment needed to host a server in house, never mind the backup, disaster recovery, and virus protection/anti-spyware services.

There are certainly downsides to placing your email with a cloud (what if the service provider goes out of business, what happens when you lose your internet connection, what if the service provider’s engineers keep reading your email), but I have a hint for IT people – most companies like cheaper whenever they can get away with it.  And in this case, spending $100,000 a year to have email in-house is hardly a good idea if you can do it for 1/100th of that cost by contract, unless of course your email (75% of which is spam and viruses) is so uber-important that you must have complete control over it.

The return on investment analysis will be different for more complex and proprietary systems.  While there may be plenty of cloud computing services offering you Microsoft Exchange, there are probably relatively few that will be able to offer hosting for the custom practice management systems for attorneys, or health records systems for physicians.  There are also more security and operational considerations for those sorts of systems – and a lower chance that a hosting provider will have the specialists on staff that you need to support that kind of system.  Notably, Lexis and Westlaw both provide what is essentially a hosted research service, and they are large enough to have teams of attorneys on staff to provide technical support to lawyers that use these services, but they appear to be the exception rather than the norm when it comes to other specialized systems.

So, in sum, cloud computing is aimed at providing services for organizations that can’t afford to host a system in-house, but can’t operate without access to the functionality of a particular application.  For organizations with existing IT staff and systems that work, I don’t see cloud computing easily supplanting either.  But then, Google might release something that you can’t live without soon enough!

Online Marketing Update

For those of you that enjoy science experiments, advertising online can present a very interesting lab.  Your objective in advertising is to sell a product or service, in this case, legal services.  The measure of success is the amount, if any, of revenue you generate from your advertisements, taking into account the actual cost of advertising online.

There are a fair number of places to advertise online.  For the internet search market, Google, Yahoo and Bing together account for probably over 90% of usage.  All of these services provide a way to advertise your web site.  They work by displaying ads that are triggered by the keywords that search engine users enter into the search form.  For example, if you create an ad that is tied to the keyword “copyright infringement,” and a user searches for that or a similar phrase, your ad will display alongside the indexed results from the search engine.  Where your ad appears in the results will depend on how much you have bid for the advertising space and on how relevant the search engine thinks your ad (and the link it will take a user to) is in relation to the words being searched for.

You also have the option of advertising on certain social networking web sites, such as LinkedIn and Facebook.  In the case of Facebook, you write an ad that will display based on the demographics that you are targeting.  For example, Facebook can target your ad to display to people living in Maryland who are older than 18, are male, and have a college degree.  LinkedIn, by virtue of targeting working professionals, allows you to target prospective customers based on their industry, job category, and location.

All of these services provide you with a way to pay for “clicks,” that is, for individuals who see your ad and actually click on the link to travel to a page on your web site.  Where users end up on your site will likely determine whether you get a customer as a result.  So, if you were to write an ad looking for prospective clients with a pending divorce case, your “landing page” (the page your ad takes a user to when clicked) should probably have information about your practice, your experience handling divorce cases, and a way to contact you to schedule time to meet.  A landing page that is not relevant to the search terms that led your user there will likely cause your visitors to quickly go somewhere else.

Not all advertising campaigns will lead directly to cash.  You may only want visitors to come to your site to learn more about you and to think of you the next time they have a legal problem that you can help them with.  In that case, you can set other goals for your advertising campaign, like increasing the time that users who find you through a search engine spend on your site, or increasing the number of pages that users click through on your site.  You might also want to develop a following, a group of users who return to your web site over time by subscribing to an RSS feed of content from your web site or blog.  Or you may want to increase the number of people reading your tweets on Twitter.  Having specific, measurable goals helps you determine whether your ads are performing properly, whether your landing pages are structured properly, and whether your overall web site is properly organized for your prospective clients.

And, with online advertising, you can tinker with your advertisements over time to evaluate which ads brought clients to your site and which did not.  With some search engines, you can also develop graphical ads and run them alongside plain old text ads to see which works better for bringing users to your web site and, ultimately, converting them into paying customers.

The Mac Lawyer’s Task Manager

On January 23, I spoke at the MSBA’s Hanging Out a Shingle conference for attorneys considering going out on their own to practice law.  I spoke about some of the technology needs and issues of solo attorneys and new firms.  One of the attendees at the conference asked me, as a Mac user, what software I used to keep track of tasks, and whether I knew of a task manager that would synchronize my tasks between my Mac and my iPhone.  I did not, but being the presenter I am, I told the audience I would look into it.

I’ve been testing out a software package called Things, developed by Cultured Code.  Things runs on the Mac and provides you with a way of tracking what’s due, and also scheduling recurring tasks (like paying the firm credit card or billing your clients).  Things can synchronize with the task list that you keep in Mail for the Mac, which can then be synchronized with MobileMe and become available within iCal (tasks in Mail will show in the right-hand column of your calendars in iCal).  As for the iPhone, Cultured Code has also published an app that can synchronize with a desktop running the full program in the office.  In order to sync, the iPhone and the Mac running Things must both be on the same WiFi network.

So, for example, in the morning, you could open up Things on your Mac, manage your tasks for the day, and before you leave the office, open up your iPhone’s Things app, let it synchronize, and then leave.  As you get stuff done while out of the office, you can check off the items on your to do list (like go to the bank, mail those payments for your office rent, buy some more copier paper, and drop off some pleadings at the courthouse).  When you get back, you can open up Things on your iPhone (after connecting to your WiFi hotspot) and Things will update on your Mac, marking off as complete those tasks that you have completed out in the world.

As a result, for those of you addicted to checking off items from your to do list, you can now get your fix electronically on your iPhone, without having to carry a paper list, or manage two independent lists (one on your Mac and one on your iPhone).

And for those of you who want to have a single task list shared across multiple Macs, there is a way to do that as well.  If you subscribe to MobileMe, you have the option of synchronizing files on iDisk.  Things stores its data in an XML database file.  You can put that set of files onto iDisk and allow other Macs to access the file and be updated with the latest list.  There is a complete post by another author here that explains the steps.  Please note, however, that this MobileMe solution is not real time.  In testing, I’ve found that you generally need to keep Things closed on the other Macs until after changes to the XML file are fully synchronized, then open up Things on the other Mac to see the changes made.  Using iDisk may be a solution for a small office, but probably will not work for a large number of attorneys and paralegals working together.  Hope this helps.

Meaningful Use – Some Thoughts

HHS and CMS have released the promised regulations to help define the phrase “meaningful use” found within ARRA, which will determine which health care professionals have been naughty (and will receive no incentive payments from Uncle Sam) and which have been nice (and will).  The regulations themselves are long.  I can’t be critical on length alone; the regulations reflect the complexity of the area they intend to regulate.  To date, the regulators have drafted the Stage 1 measures for meaningful use.  These measures will determine whether the relatively early adopters of EHRs will receive incentive payments under Medicaid or Medicare (if the provider otherwise qualifies).

This post takes a closer look at the Stage 1 criteria.  There are a number of requirements that are basic to any self-respecting EHR, such as §§ 495.6(c)(1) drug interaction checking, (2) a problem list for each patient, (3) a medication list, (4) an allergy list, (5) basic patient demographics, (6) basic vital signs, and (7) the patient’s smoking status.  Most systems will store this kind of data in discrete data fields and can make this information available to be queried for reporting.  Section 495.6(c)(8) mandates that lab data reported back to the provider be stored in a structured format.  This is also a basic dimension of an EHR, though it takes more effort to get this to work efficiently (including someone with the job of maintaining the mappings that take reported lab results and place them in specific data elements in the database).

Section 495.6(c)(9) mandates that the provider be able to generate a list of patients by disease state.  Assuming that patient diagnoses are stored in a structured format, this also should not be too difficult to address with most systems.  The medical staff would need to provide some data definitions (for example, the diagnosis codes 042 and V08 both mean HIV; a series of diagnosis codes that start with 250 mean diabetes, and so on).
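As a rough illustration only (the table and column names below are hypothetical and not drawn from any particular EHR), a disease-state list of this kind might be generated with a query along these lines:

    -- List patients carrying an HIV or diabetes diagnosis, using the ICD-9
    -- groupings described above (042 and V08 for HIV; codes beginning with
    -- 250 for diabetes).
    SELECT DISTINCT p.patient_id, p.last_name, p.first_name
    FROM dbo.patient AS p
    INNER JOIN dbo.problem_list AS pl
        ON pl.patient_id = p.patient_id
    WHERE pl.icd9_code IN ('042', 'V08')
       OR pl.icd9_code LIKE '250%';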

Section 495.6(c)(10) mandates that there be five decision support rules built into the EHR that are specialty- or priority-specific.  For example, all HIV patients in care should have an HIV viral load test performed at least every 4-6 months.  Some systems may not support this kind of point-of-service reporting tool (so that the provider is reminded while in the exam room with the patient), but presumably a reporting tool that generated reminders to patients to receive a particular test or service might meet this requirement.

However, the regulations take a turn at (c)(11) when they mandate the use of electronic eligibility data and the submission of electronic claims data.  Neither is the typical province of an EHR; both belong to a practice management system.  And while ANSI standards for electronic eligibility data have been published for years, there are still some insurers that cannot produce usable data for eligibility verification.

Section 495.6(c)(13) calls for a medication reconciliation at each office visit with the patient.  I presume the intent here is to have the provider ask the patient to verify that all the pills listed in the EHR are really what the patient is taking.  I’m not sure asking this question at every visit will be practical with every patient – particularly with the patients most at risk for interactions, those on a large number of different drugs.  Health information exchanges may help to tame some of this by presenting the physician with listings of drugs associated with the patient from multiple sources, but truthfully, this may quickly become bewildering for both the patient and the provider.

Section 495.6(c)(14) calls for a record summary to accompany each referral for specialty care.  With the paper referral system today, this will increase the amount of paper shared between practices.  I would hope this requirement would push more providers into participating in an HIE so that this kind of thing could be shared electronically.

Sections 495.6(c)(15) and (16) address sharing data with certain governmental agencies for tracking patient immunizations and reportable diseases that are surveilled by local health departments.  Presumably both would be better addressed by having the government agency participate as a recipient of data from an HIE, rather than by building an interface directly from each provider to the agency requesting the information.  The issue here, however, is that the items to be reported are unlikely to be initially available from the HIE, because these data elements may or may not be consistently stored across EHR systems (particularly immunizations; reportable conditions are often keyed to a particular diagnosis code, such as the codes for syphilis or HIV, and problem lists are more often consistently stored as structured data).

And, even though risk assessments have been mandated since 2003 within the HIPAA security regulations, CMS felt that this specific requirement needed to be reiterated within the meaningful use regulations.  My guess: most providers don’t regularly perform risk assessments because they are time-consuming, and information systems change too frequently for the risk assessment process to keep up.

Section 495.6(d) provides another eight requirements for providers.  Notably, the regulations mandate direct electronic patient access to the health record chart, and the ability to feed data to patients on request (for example, for patients with a personal health record who want a live feed of lab results and medications from their doctor).

Overall, the regulations are substantial.  Some of the requirements in the regulations will cause some consternation for providers and will likely lengthen the time to implement EHRs for some organizations that were focused on the basics of just getting the visit documented in the system.