News

The Battle Over Health IT Has Begun

The battle lines over how to spend the money for technology to improve health care are beginning to be drawn.  As a former director of an IT department at a health center that implemented a proprietary health record system in 2003, I can offer a useful perspective on some of the issues.  Phillip Longman’s post on health records technology discusses the choice between a closed and an open source health records system, which is part of the larger debate over open source and its impact on application development online.

I’m generally a fan of the open source community.  Shareware developers have been creating useful applications and offering them to the public since I started using a PC as a kid back in the ’80s.  There is a lot to be said for application development done in a larger community where sharing is OK.  For example, my blog runs on WordPress, an open source blogging platform that serves not just writers like me, but also developers who create cool plugins for WordPress blogs that do all sorts of nice things, like integrate with Google Analytics, back up your blog, or modify your blog’s theme, just to name a few that I happen to use regularly (thanks, all of you that are linked to).

In 2003, we looked at a number of health records systems, ultimately allowing our user community at the time to choose between the two finalists, both of which were proprietary systems.  One of my observations at the time was that there was a wide array of information systems available to health care providers, some written by fellow practitioners and others written by professional developers.  I would be willing to bet that today there are even more health IT systems in the marketplace.  We ended up going with a product called Logician, which at the time was owned by MedicaLogic (now a subsidiary of the folks at GEMS IT, a division of General Electric).

Logician (now called Centricity EMR) is a closed source system that runs in Windows, but allows for end users to develop clinical content (the electronic equivalent to the paper forms that providers use to document care delivery) and to share that clinical content with other EMR users through a GE-hosted web site for existing clients of the system.  In addition, Logician has a substantial following to support a national user group, CHUG, and has been around long enough for a small cottage industry of developers to create software to integrate with Logician (such as the folks at Kryptiq, Biscom, and Clinical Content Consultants who subsequently sold their content to GEMS IT for support).

After six years of supporting this system, I can assure you that this technology has its issues.  That’s true, of course, of almost all information systems, and I would not suggest that the open source community’s eclectic collection of developers necessarily produces software that is any less buggy or any easier to support.  And, in fact, I don’t have any opinion at all as to whether health records would be better off in an open source or a proprietary health record system.  Health professionals are very capable of independently evaluating the variety of information systems and choosing a system that will help them do their jobs.  One of the big reasons that these projects tend to fail is a lack of planning and investment in the implementation of the system before the thing gets installed.  This implementation process, which, when done right, engages the user community in guiding the project to a successful go-live, is probably more important, and actually takes more effort, than the information system itself.

Mr. Longman criticizes the development model of “software engineers using locked, proprietary code” because this model lacks sufficient input from the medical users that ultimately must use the system in their practices.  I suppose there must be some health records systems out there that were developed without health provider input, but I seriously doubt they are used by all that many practices.  I do agree with Mr. Longman that there are plenty of instances where practices tried to implement a health records system and ended up going back to paper.  We met several of these failed projects in our evaluation process.  But I would not conflate proprietary systems with the failure to implement; proprietary systems that actually include health providers in their development process can be successfully implemented.  Open source can work, too.  As Mr. Longman points out, the VA has been using an open source system now called VistA, which works for the VA’s closed delivery system (patients at the VA generally get all of their care at a VA institution and rarely go outside for health care).

My point is that the labels “open source” and “proprietary” alone are not enough to predict the success or failure of a health records system project.  Even a relatively inexpensive, proprietary, and functionally-focused system that is well implemented can improve the health of the patients served by it.  There is a very real danger that the Obama administration’s push for health IT will be a boondoggle given the scope and breadth of the vision of health IT in our country.  But the health industry itself is an enormous place with a wide variety of professionals, and the health IT marketplace reflects this in the varied information systems (both open source and proprietary) available today.  I would not expect there to be any one computer system that will work for every health care provider, regardless of who actually writes the code.

RIAA & Copyright: $2 Million Fine Wacky Consequence of Copyright Act

The RIAA, in a new trial of an alleged copyright infringer who had shared 24 songs via the file-sharing service Kazaa, won almost $2 million in statutory damages against the defendant, calculated under the U.S. Copyright Act.  (See the Wired story here)  This is an almost tenfold increase over the award the RIAA won against Thomas-Rasset in the original trial in 2007.  The basis for these damages is a provision of the Copyright Act, codified at 17 U.S.C. § 504(c)(2), which allows a maximum award of $150,000 per work on a finding of willful infringement by the defendant.  Statutory damages are available to a plaintiff who elects not to seek actual damages for the infringement proved at trial.

In this case, the defendant Thomas-Rasset had been sharing 24 songs online.  Assuming that she had a typical ADSL connection, where the upload speed is considerably slower than the download speed, and that the average size of each file shared was about 3 megabytes, she would have been able to share about 0.22 songs per minute with other users of Kazaa.  If each song would have cost $1 to purchase as a single from a reputable vendor (like iTunes), and she shared these 24 songs continuously for a year, the lost sales to the music industry would have been about $115,632 (with about 20% of this going to the reseller and not the music companies), or roughly 1/20th of the damages award against her for her infringement of the plaintiff’s copyrights.  Thomas-Rasset’s own estimate of the actual damages proved by the plaintiff was even smaller – on the order of $150.  (See the filing seeking remittitur after the original trial resulted in a $222,000 verdict against her)
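To make the arithmetic behind that estimate explicit, here is a minimal sketch in Python using the figures above; the sharing rate, file size, and $1 price are this post’s assumptions (not facts from the trial record), and the $80,000-per-song figure reflects the reported verdict of $1.92 million for 24 songs.

    # Back-of-the-envelope version of the "lost sales" estimate discussed above.
    # Assumptions (from the post, not the trial record): ~3 MB per song, an ADSL
    # upload rate working out to ~0.22 songs shared per minute, $1 per song.
    SONGS_PER_MINUTE = 0.22           # assumed sharing rate
    PRICE_PER_SONG = 1.00             # assumed retail price of a single
    MINUTES_PER_YEAR = 60 * 24 * 365  # 525,600 minutes

    lost_sales = SONGS_PER_MINUTE * MINUTES_PER_YEAR * PRICE_PER_SONG
    print(f"Estimated lost sales over one year: ${lost_sales:,.0f}")  # ~$115,632

    # Compare with the statutory damages award of almost $2 million for 24 songs.
    award = 1_920_000                 # reported verdict, about $80,000 per song
    print(f"Award per song: ${award / 24:,.0f} (statutory maximum: $150,000)")
    print(f"Award vs. estimated lost sales: about {award / lost_sales:.0f}x")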

I certainly do not condone copyright infringement, but the damages sought by the RIAA in this case are highly disproportionate to the alleged injury to the copyright holders.  Seeking such a large award against an individual reflects, to me at least, the frustration the RIAA has had in pursuing the makers of the file sharing platforms directly, many of whom are either out of the RIAA’s legal reach or otherwise judgment proof.  While I would not call Thomas-Rasset an “innocent infringer,” nor a “fair user” given the prior Supreme Court jurisprudence holding otherwise, I also would not call her a “pirate” worthy of the civil version of a hanging, either.  Let’s hope the judge has the good sense to reduce the award to a more reasonable level.

How Virtualization Can Help Your DR Plan

Virtualizing your servers can improve your readiness to respond to disasters, such as fires, floods, virus attacks, power outages, and the like.  Popular solutions, such as VMware’s ESX virtualization products, in combination with data replication to a remote facility or backups using a third party application like vRanger, can speed up your ability to respond to emergencies, or even reduce the number of emergencies that require IT staff to intervene.  This article will discuss a few solutions to help you improve your disaster recovery readiness.

Planning

Being able to respond to an emergency or a disaster requires planning before the emergency arises.  Planning involves the following: (1) having an up-to-date system design map that explains the major systems in use, their criticality to the organization, and their system requirements; (2) having a policy that identifies what the organization’s expectations are with system uptime, the technical solutions in place to help mitigate risks, and the roles that staff within the organization will play during an emergency; and (3) conducting a risk assessment that reviews the risks, mitigations in place, and unmitigated risks that could cause an outage or disaster.

Once you have a system inventory, policy, and risk assessment, you will need to identify user expectations for recovering from a system failure; these expectations provide the starting point for a gap analysis of how far your current systems are from them.  For example, if you back up systems to digital tape once weekly, but interviews with users indicate that a particular system’s data cannot be re-created manually and that losing more than a few hours of it is unacceptable, your gap analysis would show that the current mitigation is not sufficient.
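As a concrete illustration of that kind of gap analysis, here is a minimal Python sketch; the systems, backup intervals, and tolerances below are hypothetical examples, not recommendations.

    # Illustrative recovery-point gap analysis: compare the worst-case data loss
    # implied by the current backup scheme against the loss users say they can
    # tolerate. All systems and numbers are hypothetical.
    systems = [
        # (name, backup interval in hours, tolerable data loss in hours)
        ("Practice management DB", 24 * 7, 4),   # weekly tape backup, 4h tolerance
        ("File server",            24,     24),  # nightly backup, 1 day tolerance
        ("Email",                  24,     8),
    ]

    for name, backup_interval_h, tolerance_h in systems:
        gap = backup_interval_h - tolerance_h
        status = "OK" if gap <= 0 else f"GAP: up to {gap} hours of unrecoverable data"
        print(f"{name:<25} worst-case loss {backup_interval_h}h, "
              f"tolerance {tolerance_h}h -> {status}")

Anything that comes back with a gap is a candidate for a different mitigation (more frequent backups, replication, or virtualization, as discussed below).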

Now, gentle reader, not all user expectations are reasonable.  If you operate a database with many thousands of transactions worth substantial amounts in revenue every minute, but your DR budget is relatively small (or non-existent), users get what they pay for.  Systems, like all things, will fail from time to time, no matter the quality of the IT staff or the computer systems themselves.  There is truthfully no excuse for not planning for system failures to be able to respond appropriately – but then, I continue to meet people who are not prepared, so…

However, user expectations are helpful to know because you can use them to gauge how much focus should be placed on recovering each system and, where there are gaps in readiness, to make the case for expanding your budget or resources to improve readiness as much as is feasible.  Virtualization can help.

Technology

First, virtualization generally can help to reduce your server hardware budget, as you can run more virtual servers on less physical hardware – especially those Windows servers that don’t really do that much (CPU and memory) most of the time.  This, in turn, can free up more resources to put towards a DR budget.

Second, virtualization (in combination with a replication technology, either on a storage area network, such as HP LeftHand, or through another software solution, for example, Double-Take) can help you make efficient copies of your data to a remote system, which can be used to bring a DR virtual server up to operate as your production system until the emergency is resolved.

Third, virtual servers can be more easily backed up to disk using software solutions like vRanger Pro, which can in turn be backed up to tape or somewhere else entirely.

Virtualization does make recovery easier, but not pain-free.  There is still some work required to make this kind of solution work properly, including training, practice, and testing.  And you will likely need some expertise to help implement a solution (whether you work with VMware, Microsoft, or another virtualization vendor).  On the other hand, not doing this means that you are left to “hope” you can recover when a system failure occurs.  Not much of a plan.

Testing and Practice

Once the technology is in place to help recover from a system failure, the most important thing you can do is to practice with this technology and the policy/procedure you have developed to make sure that (a) multiple IT staff can successfully perform a recovery, (b) you have worked out the bugs in the plan and identified specific technical issues that can be worked on to improve it, and (c) those who will participate in the recovery effort can work effectively under the added stress of performing a recovery with every user hollering “are you done yet?!?”.

Some of the testing should be purely technical: backing a system up and being able to bring it up on a different piece of equipment, and then verifying that the backup copy works like the production system.  And some of the testing is discussion-driven: table-top exercises (as discussed on my law web site in more detail here) help staff to discuss scenarios and possible issues.

All of the testing results help to refine your policy, and also give you a realistic view of how effectively you can recover a system from a major failure or disaster.  Some systems (like NT 4.0 based systems) will not be recoverable, no matter what you may do.  Upgrading to a recent version of Windows, or to some other platform altogether, is the best mitigation.  In other cases, virtualization won’t be feasible because of current budget constraints, technical expertise, or incompatibility (not all current Windows systems can be virtualized, because some have unique hardware requirements or otherwise won’t convert to a virtual machine).  But there are a fair number of cases where virtualizing will help improve recoverability.

Summary

Virtualization can help your organization recover from disasters when the technology is implemented within a plan that is well-designed and well-tested.  Feedback?  Post a comment.

Copyright and Fair Use

Jeff Koons, an “appropriation artist” who has been known for controversy in his career, has tested the limits of fair use under the Copyright Act in two cases: Rogers v. Koons, 960 F.2d 301 (2nd Cir. 1992), and Blanch v. Koons, 467 F.3d 244 (2nd Cir. 2006), with opposite results.

The United States Copyright Act grants to owners of copyrights exclusive rights under section 106, including the right of reproduction, preparation of derivative works, and to distribute copies of the copyrighted work.  17 U.S.C. § 106 (2007).  These exclusive rights, however, are subject to certain exceptions enumerated in the statute, including “fair use” as it is defined under section 107.  Besides certain kinds of academic and journalistic uses, a party can use a copyrighted work if the use falls within the factors described in section 107, namely: (1) the character of the use, (2) the nature of the protected work, (3) the amount of the work used, and (4) the effect of the use on the market for the copyrighted work.  Id. at § 107.

Under the first factor, the court weighs whether the alleged infringer stood to profit from his use of the copyrighted work without paying the customary price.  Brown v. McCormick, 23 F. Supp. 2d 594, 607 (D. Md. 1998).  While the fact that an alleged infringer profits from his use is not necessarily dispositive of a fair use defense, profiting without compensating the copyright holder will weigh heavily against the “fair” user.  See Rogers, 960 F.2d at 309 (citing Sony Corp. v. Universal Studios, 464 U.S. 417, 449 (1984)).

Under the second factor, the court weighs whether the copyrighted work is more like a compilation of facts (like a phone book) or more like a creative work (like a painting).  More protection is offered to a work the more creative it is under this factor.  Brown, 23 F. Supp. 2d at 607.

Under the third factor, the court weighs how much of the copyrighted work was used by the alleged infringer in the subsequent work at issue, and how substantial that amount was as compared to the copyrighted work as a whole.  Taking a small portion of a copyrighted work that is not central to the work’s theme or thesis will more often than not be protected as a fair use.  Id.

Under the final factor, the court weighs what impact the fair use had on the markets for the copyrighted work.  For example, if the copyrighted work was a photograph, and the alleged fair use was also a photograph that was sold to the same market as the copyright holder without compensation or royalties to the copyright holder, this factor would weigh heavily against a finding of fair use.  Id. at 608.

In Rogers, Jeff Koons had taken a photograph made by Art Rogers, which Koons had purchased in the form of a post card in a tourist card shop, and provided it to his artisans to copy in creating a sculpture that was to form a portion of Koons’ “Banality Show,” which opened on November 19, 1988 at Sonnabend Gallery.  Rogers, 960 F.2d at 305.  Koons provided specific instructions that the photograph was to be copied faithfully into the resulting sculpture, and visited the contracted artisans weekly to ensure compliance with his directions.  Id.
The Second Circuit found that Koons’ use was not a fair use within the meaning of section 107 of the Copyright Act.  Id. at 312.

In Blanch, Koons had again appropriated an element of a photograph, this one made by Blanch and used in a commercial advertisement in the August 2000 issue of Allure Magazine.  Blanch, 467 F.3d at 247.  In this case, Koons took the legs and feet of the female model in Blanch’s photograph, inverted the orientation of the legs so that they dangled vertically rather than horizontally as in the original photograph, and added a heel to one of the model’s feet.  The modified legs and feet were then incorporated into a painting by Koons entitled “Niagara.”  Id. at 248.

The Second Circuit found that Koons’ use was a fair use under section 107.  Id. at 259.  Why Blanch came out differently than Rogers, however, is not self-explanatory.

The Second Circuit in Blanch spends a fair amount of time on the idea that fair use is about the transformation of an existing copyrighted work into a new work with new insights and understandings about the original matter.  Koons, according to the Court, had aimed at a kind of criticism of the aesthetic employed by the advertisement in Allure Magazine, so that a neutral party would understand Koons to be commenting on the purpose of the protected photograph through his painting.  Id. at 252.

Interestingly, Koons’ sculpture based on Rogers’ photograph was presented as criticism of the “banality” of that photograph, the consequent cheapening of art by its commercialization, and the resulting deterioration in the quality of society as a whole.  Rogers, 960 F.2d at 309.  The Court disagreed that Koons’ work in Rogers was readily understood as criticism, because the sculpture did not communicate that it was actually based on Rogers’ original photograph.  Id. at 310.  Of course, the painting in Blanch arguably did no better a job of announcing which or whose photograph it was actually based on, even though Koons did do more work with his computer to crop and alter the original photograph before placing it into its new “context.”  Perhaps the lesson of these two cases is that painters should use the basic crop and rotate tools in Photoshop if they wish to infringe upon another’s copyrighted photograph!

Accepting that Koons’ main thrust is a social criticism of our society’s materialism, I would imagine that at least some artists (and some sophisticated members of the public) who view his work get his message.  If this weren’t true, there would probably not be such a market to support his work.  Koons provokes a response from some artists that, because his works are merely a commodity, they are not art at all.  Others are willing to accept his work in the tradition of Duchamp and Warhol, who made things that were on the fringe of what was acceptable as “art.”  Whether the general public understands what Koons’ work is about is probably a separate matter, just as it would have been for earlier artists who were pushing on the traditional notions and boundaries of art and criticism.

Does “fair use” turn on whether the public at large (the omnipresent “reasonable person” in the law), represented by the judge assigned to hear the case, understands the message of the work?  If the definition of art turned on this understanding, much art would not be recognized as “art” at all – at least not by the contemporaries of the work being defined.  Whether a work is “fair use” is a less philosophical question than whether a work is “art,” but I would argue that the question is not made easier by this distinction.  The authors of the Copyright Act acknowledged that the exclusive rights protected by the law had to be limited for the benefit of the public, and that courts (rather than the more dubious options of the executive or the legislature) were in the best position to equitably balance these two competing interests.

But is this really a fair standard for adjudicating a copyright dispute?  The Second Circuit in Rogers stated that the “copied work must be, at least in part, an object of the parody….”  It continued, “[w]e think this is a necessary rule, as were it otherwise there would be no real limitation on the copier’s use of another’s copyrighted work to make a statement on some aspect of society at large.”  Id.  I think most would agree that there must be some objective standards by which to find infringement or not.  But perhaps this reasoning was really just cover for the underlying feeling of the court that Koons was taking advantage of another without paying the customary fee.  The Second Circuit ultimately held in Rogers that Koons had acted in bad faith and profited substantially from his sculpture without compensating the original photographer.  Rogers, 960 F.2d at 310, 305.  Somehow (maybe Koons got better attorneys to represent him), though, this sense of bad faith was absent from the court’s view in Blanch, even though Koons most certainly profited from his unlicensed use of Blanch’s photograph (Koons’ works in both these cases sold for over $100,000 each).  Blanch, 467 F.3d at 248.

Where does this leave artists?  Well, for most, these kinds of questions are academic, as most artists do not have the financial wherewithal to go through civil litigation to avoid paying a royalty to a photographer.  “Transforming” and not acting in “bad faith” appear to be the guideposts for acceptable fair use, but there would seem to be a rather wide field between these two guideposts.  Perhaps the next Koons lawsuit will settle things!  Stay tuned.

Trademark Infringement & Starbucks

Starbucks is a well known, international purveyor of coffee products, with thousands of stores throughout the world.  Starbucks v. Wolfe’s Borough Coffee, Inc., No. 01 Civ. 5981 (LTS)(THK), 2005 U.S. Dist. LEXIS 35578 (S.D.N.Y. Dec. 23, 2005) (Starbucks I).  Starbucks Corporation was formed in 1985 in Washington State, after the original founders had been in business for themselves since 1971 in Seattle’s Pike Place Market.  Id. at *3.  Under a traditional trademark analysis, the Starbucks mark is a strong one: Starbucks has spent a substantial amount of money to market its coffee products worldwide (over one hundred thirty-six million dollars from 2000-2003).  Id. at *5.  One should not use a trademark similar to “Starbucks” without expecting trouble.

In 2004, Wolfe’s Borough Coffee, a small coffee manufacturer that distributes its brands through a store in New Hampshire and some New England supermarkets, was sued by Starbucks in the Southern District of New York for trademark infringement and dilution under the Lanham Act and state law.  Id. at *6.  Wolfe’s Borough Coffee was trading under two allegedly infringing names: “Mr. Charbucks” and “Mister Charbucks,” both similar to the trademark “Starbucks” used by the famous coffee house of the same name.  Starbucks v. Wolfe’s Borough Coffee, Inc., 559 F. Supp. 2d 472 (S.D.N.Y. June 5, 2008) (Starbucks III).  Yet, Starbucks lost in the district court on all of its claims.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at *29.  Starbucks appealed, the Second Circuit vacated and remanded in 2007 because of Congress’s 2006 amendment of the Lanham Act through the Trademark Dilution Revision Act, and the trial court adhered to its prior decision in favor of the defendant in 2008.  Starbucks v. Wolfe’s Borough Coffee, Inc., 477 F.3d 765 (2nd Cir. 2007) (Starbucks II); 15 U.S.C. §§ 1125(c), 1127 (2008); Starbucks III.

Starbucks Claims

Starbucks sued Wolfe’s under federal and state law, alleging trademark infringement under sections 1114 and 1125(a) of the Lanham Act, trademark dilution under sections 1125(c) and 1127 of the Lanham Act and also under New York law, and unfair competition under state common law.  15 U.S.C. §§ 1114(1), 1125(a) (2008); Id. at §§ 1125(c), 1127; N.Y. Gen. Bus. Law § 360-1 (1999).  This case note will focus on the allegation of trademark dilution.

In order to prove trademark dilution, the plaintiff must demonstrate that (a) the plaintiff’s mark is famous, (b) the defendant is making commercial use of the famous mark, (c) the defendant’s use came after the plaintiff’s use, and (d) the defendant’s use of the plaintiff’s mark dilutes the plaintiff’s mark.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at *22.  The defendant had conceded the first three elements, leaving only the last element of the rule in dispute.  Id.

Moseley v. Victoria’s Secret Catalogue, Inc., 537 U.S. 418, 433 (2003), requires a plaintiff to prove actual dilution rather than a likelihood of dilution in order to prevail under the Lanham Act’s anti-dilution section.  New York law is less stringent than federal law in this area, and the court reasoned that if the plaintiff could not prevail under state law, it also could not prevail under federal law.  Starbucks I, 2005 U.S. Dist. LEXIS 35578 at *25.  The court examined the likelihood that the defendant’s use of its marks would either blur or tarnish the plaintiff’s marks, and concluded that the plaintiff could not prevail under either standard.  Id. at *30.  Blurring occurs when a defendant uses the plaintiff’s mark to identify the defendant’s products, increasing the possibility that the plaintiff’s mark will no longer uniquely identify the plaintiff’s products.  Id. at *25.  Tarnishment occurs when a plaintiff’s mark is associated with products of a shoddy or unwholesome character.  Id. at *26.

The court’s review of the record caused it to conclude that the plaintiff had failed to demonstrate actual or likely diminution “of the capacity of the Starbucks Marks to serve as unique identifiers of Starbucks’ products…” because the plaintiff’s survey results did not show an association between the defendant and the mark “Charbucks,” only that respondents associated the term “Charbucks” with “Starbucks.”  Id. at *27.  The court also held that the plaintiff’s survey results did not substantiate that the mark “Charbucks” would reflect negatively on the Starbucks brand.  Id.  The plaintiff therefore lost on its dilution claims.

Change in Dilution Act

Prior to 2006, dilution of a famous mark required that the plaintiff demonstrate actual dilution to prevail under section 1125(c) of the Lanham Act.  Moseley, 537 U.S. at 433.  However, Congress amended the applicable statute to require only that the defendant’s use was “likely to cause dilution.”  Starbucks II, 477 F.3d at 766.  The Second Circuit held that it was not clear whether the amended Lanham Act’s prohibition of dilution of famous marks was coextensive with New York law, the latter being the basis for the trial court not finding dilution of Starbucks’ marks.  Id.  Therefore, the appeals court vacated the trial court’s judgment and remanded for further proceedings.  Id.

On Remand

The district court took the Starbucks case back up under the amended anti-dilution statute.  To demonstrate blurring of a famous mark, the amended Lanham Act requires a court to consider all relevant factors, including: “(i) [t]he degree of similarity between the mark or trade name and the famous mark[;] (ii) [t]he degree of inherent or acquired distinctiveness of the famous mark[;] (iii) [t]he extent to which the owner of the famous mark is engaging in substantially exclusive use of the mark[;] (iv) [t]he degree of recognition of the famous mark[;] (v) [w]hether the user of the mark or trade name intended to create an association with the famous mark[;] and (vi) [a]ny actual association between the mark or trade name and the famous mark.”  Starbucks III, 559 F. Supp. 2d at 476 (citing 15 U.S.C. § 1125(c)).

Degree of Similarity

The district court held that a plaintiff must demonstrate under this element that the marks are very or substantially similar.  The court pointed out that the defendant’s marks appear on packaging that is very different from the plaintiff’s, and that the defendant used the rhyming term “Charbucks” together with “Mister,” whereas “Starbucks” appears alone when used by the plaintiff; the court therefore found this factor to weigh against the plaintiff.  Id. at 477.

Distinctiveness of Starbucks Mark

Given the extent of the use of the Starbucks mark by plaintiff and the amount of money expended by the plaintiff in its marketing program, the court found this factor favored the plaintiff.  Id.

Exclusive Use by Starbucks

The fact that the plaintiff polices its registered marks, and the amount of money the plaintiff spends on using the mark, both led the court to weigh this factor in favor of the plaintiff.  Id.

Degree of Recognition of Starbucks’ Mark

Again, given the longevity and number of customers that visit Starbucks stores, the court found this factor to favor the plaintiff.  Id.

Defendant’s Intent to Associate with Starbucks’ Mark

The court found that, while the defendant intended to allude to the dark roasted quality of Starbucks brand coffees, the marks are different and the defendant had not acted in bad faith, and it therefore weighed this factor in favor of the defendant.  Id. at 478.  The court reasoned that the defendant used this mark to distinguish its own lines of coffee products, with the Mr. Charbucks brand being the dark roasted coffee as compared to other Wolfe’s Borough/Black Bear coffees.  Id.

Actual Association with Starbucks’ Mark

Here, the court found that, while some respondents to the survey conducted by Starbucks did associate the defendant’s marks with the Starbucks mark, this association alone is not enough to find dilution.  Id.  Instead, the court found that the defendant’s marks would not cause customers to confuse the defendant’s products with the plaintiff’s.  Rather, customers would tend to see the playful reference to a quality of Starbucks’ coffee – the dark roast – as distinguishing one kind of Wolfe’s Borough brand coffee from other Wolfe’s Borough brand coffees.  Id.

Tarnishment Analysis

The amended Lanham Act also provides a specific definition for dilution by tarnishment: “[an] association arising from the similarity between a mark or trade name and a famous mark that harms the reputation of the famous mark.”  15 U.S.C. § 1125(c)(2)(C) (2008).  The court held that the plaintiff’s survey evidence could not support a finding of dilution by tarnishment, because the plaintiff’s survey was susceptible to multiple and equally likely interpretations.  Starbucks III, 559 F. Supp. 2d at 480.  In addition, the court found that the defendant’s coffee products were not of actual poor quality, so any actual association between the defendant’s coffees and Starbucks would not likely be damaging to Starbucks.  Id.

As a result, Starbucks lost its case on remand for trademark dilution.  One might almost say that Starbucks has become so synonymous with quality dark roasted coffees that its brand name can’t be diluted by other quality coffee brands.  Instead, the Starbucks mark is a victim of its own success in the world.  Add that to the list of reasons why a Starbucks on every street corner is not a good idea.

Note: This post was originally published in the Annual Intellectual Property Law Update, volume II, June 2009, Maryland State Bar Association Intellectual Property Section – Publications Committee.

iPhone 3.0 Software Available

For iPhone users, you can now download version 3.0 of the operating system for your phone, but you may be waiting a while for the iTunes store to let you get the update.  The update itself is about 258 MB, which will cause some waiting all by itself, but because so many other people are downloading this update, you will also be waiting for pretty much anything else that you need from the store (like updating apps on your iPhone, for example).

However, the update of my 3G phone went just fine this morning, so hopefully you iPhone users out there will have a similar experience.  Click here for Apple’s official info on what’s new in the latest update.

June 22 Update.

So there are a number of noticeable improvements in iPhone OS version 3.0.  For one, there is an overall search function for the iPhone, which now lives on the first page of applications on the phone.  This search covers a variety of content on the phone, such as your contacts, music, email, installed apps, and calendar, so you can find stuff faster.

Your call log now shows you which phone entry you called, and if the number is not in your contacts, the phone will tell you the city and state of the number in the log.  This saves you a tap to find this info.

There is now also a search function built into your email.  This is particularly helpful given that you probably have far more messages on the mail server that you access via IMAP.  For example, my Gmail account today has about 3,000 messages stored in my mailbox, so being able to search through this to find a message is a big help.  You can also turn your phone sideways to look at your messages in landscape mode, which is helpful when you are looking at attachments or just reading your messages.

I also noted that the stock quote app that comes with the phone now has more information on each stock that you might be following, including high/low, P/E, market cap., and news.  And you can look at the stock price graphs for each stock in landscape mode by just turning your phone ninety degrees.

There are a number of other nice add-ons to the phone that you can investigate further on Apple’s site.  Happy i-Phone-ing!

Updates on Online Marketing Efforts

A few months ago, I wrote some articles here about efforts to market my law practice online through Yahoo and Google search advertising.  Here are some preliminary results on the efforts and plans to improve my marketing efforts.

More Content = More Visitors

One of the essential rules of web site design is that the more content on your site, the more chances you have of being indexed against keywords that internet users may be using to search.  Therefore, you increase the number of people that visit your site when you have more indexed content, which increases the chances of picking up a new client through the site or impressing one of your colleagues with your knowledge such that they refer you a prospective customer or two.

Of course, adding content is a time-intensive kind of thing, even if you write blog articles day and night.  One of the good things about Google and Yahoo is that their search engines will come back and visit your site for new content on a regular basis (Google checks my site more than once per week, and you can pay Yahoo for priority indexing so it visits your site at least weekly for content changes and indexing).

Pushing Out Newsletters

Pushing out newsletters to subscribers, with links that return readers to your site for the newsletter’s full content, also helps to drive users to your site (and adds content directly to your site for the hungry search engines).  Over time, I definitely see spikes in activity on my site around the times that I send out newsletters via email to subscribers.

Watch What Keywords Bring Users to Your Site

Google Analytics keeps track (for the web site pages in which you embed the JavaScript needed to collect data on site visitors) of the keywords that bring users to your web site.  These keyword statistics help to determine what brought a user to the site, and may lead you to change your site content to either encourage or discourage the kinds of visitors that are reaching your site.  For example, I had written a newsletter about the new Massachusetts law that is aimed at protecting consumer data collected by businesses in that state.  (See 201 CMR § 17.00).  A number of users have found my site because of this newsletter, particularly in their searches by citation to the statute itself.

On the other hand, some users looking for a bankruptcy or divorce attorney have also landed on my web site, which suggests that my site has been indexed under overly general keywords related to law, or that the advertisements I have running on Yahoo or Google (if they are the source of some of these visitors) are not specific enough in their focus (again, based on their keywords).  For example, my web site does show up on the third page of search results when searching with Google for “disaster recovery table top exercise,” but I would be happier if my site were closer to the first page of results.

Related to this are the web sites that refer visitors to my site.  Interestingly, Facebook is my top referring web site, followed by LinkedIn (where I have a professional profile), and then some web sites that I don’t recognize but that apparently have indexed my site into their search results for one reason or another.  The same question arises of whether to encourage or discourage such links based on the content of your own site.

Twitter and Tweeting

So far, I haven’t done much tweeting out in the world of Twitter.  I guess as an attorney, 140 characters is just too restrictive.  Perhaps if I start writing haikus, Twitter would be the place to publish them!  I see that Iran’s election results are being tracked by Twitter users in Iran, so maybe if I were at a live event like Apple’s annual trade show or another large meeting, I’d be more prone to tweet away (which I can do from my iPhone if I were so inclined).  WordPress does support integration with Twitter via a plugin, so if you want to be able to put your tweets into a digest form and load them automatically to your blog, you can.  Time will tell if Twitter ends up being useful for marketing a law firm.

Most Importantly…

Keep working at it and don’t be afraid to try new things.  Google Analytics (or a similar web tracking software package that you can use on your web site logs) will help you to figure out why people come to your site and what they spend time looking at on it.  And be patient – online marketing is a fair amount like fishing.  Some days you come home with nothing, and other days, you find a place pre-stocked with your favorite fish and you come home fat and happy!

Virtualization Primer

The dictionary definition of “virtual” is being in essence but not in fact.  That’s kind of an interesting state when you are talking about computers.  The reason is that for most folks, a web server or some other mysterious gadget that makes their email work is probably “virtual” to them already because most users don’t see the server itself in action, only its result (a web page served up or an email delivered to their inbox).  Virtualization is one of those “deep IT” concepts that the average person probably doesn’t pay much attention to.  But here is conceptually what’s going on.

Back in the bad old days of computing, if you wanted to have a Microsoft Windows Server installed and available for use, you would go out and buy yourself a physical piece of computing equipment, complete with hard drives, RAM, video card, motherboard, and the rest, and install the Windows operating system to it.  If you needed another server (for perhaps another application like email or to share files), you would go out and buy a new piece of hardware and another license for Windows Server, and away you would go.  This kind of 1:1 ratio of hardware to server installation was fine until you had more than a few servers installed on your network at home and the AC couldn’t keep up with the heat output of your computer equipment.

So a bunch of very smart people sat down and asked how this could be handled better.  I’m sure someone in the room said to just buy much more expensive hardware and run more applications on the same physical server.  This was in fact the model larger businesses used back in the even worse old days of mainframe computing (oh yeah, people still use mainframes today, they just keep them in the closet and don’t advertise this to their cool friends who have iPhones and play World of Warcraft online).

But the virtualization engineers weren’t satisfied with this solution.  First off, what happens when the hardware that everything is running on fails?  All of your eggs, being in one basket, are now toast until you fix the problem or bring up a copy of everything on another piece of equipment.  Second, what happens when one of those pesky applications decides to have a memory leak and squeezes everybody else off the system?  Same as number one above, though the fix is probably quicker because you can just bounce that ancient mainframe system (if you can find the monk in the medieval monastery who actually knows where the power button is on the thing, that is).  Third, mainframes are really pretty expensive, so not just any business is going to go and buy one, which means that a fair amount of the market for server equipment has been bypassed by the mainframe concept.  And finally, mainframes aren’t cool anymore.  No one wants to buy something new that isn’t also cool.  Oh wait, I doubt the engineers that were sitting in the room having a brainstorming session would have invited the marketing department in for input this early on.  But it is true – mainframes aren’t cool.

So, this room of very smart people came up with virtualization.  Basically, a single piece of computing hardware (a “host” in the lingo) can be used to house multiple, virtual instances (“virtual machines”) of complete Windows Server installations (and other operating systems, though Windows virtualization is probably driving the market today).  On top of that, they came up with a way for these virtual machines to move between physical servers without rebooting the virtual machines or even causing much of an impact on performance to the users.  Housing multiple complete virtual machines on a single host works because most Windows machines sit around waiting for something to happen pretty much all day – I mean, even with Microsoft Windows, how much does a file server really have to think about the files that it makes available on shares?  How much does a domain controller have to think in order to check if someone’s username and password are valid on the domain?  Even in relatively large systems environments, there are a considerable number of physical servers that just aren’t doing all that much most of the time.

Virtualization provides a way to share physical CPU and memory across multiple virtual machines, so you can get more utility out of each dollar you have to spend on physical server equipment.  Some organizations are therefore able to buy fewer physical servers each year.  Sorry Dell and HP – didn’t mean to rain on your bottom line, but most IT departments are trying to stretch their capital budgets further because of the recession.  Fewer servers also means less HVAC and power, both of which have increased in cost as energy markets have been deregulated and prices have started to more closely follow demand.  I guess BG&E and Pepco are also sad, but look, some of your residential customers still set the AC at 65 degrees, so just charge them three times as much and everyone is happier!
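To put rough numbers on that consolidation math, here is a back-of-the-envelope Python sketch; every figure in it is an illustrative assumption, not a sizing recommendation for any particular product.

    import math

    # How many virtualization hosts might replace a rack of mostly idle servers?
    # All numbers below are illustrative assumptions.
    physical_servers = 20
    avg_cpu_cores_used = 0.5   # each old server averages half a core of real work
    avg_ram_used_gb = 3        # and a few GB of RAM

    host_cores = 16            # a single modern virtualization host
    host_ram_gb = 96
    headroom = 0.75            # plan to load hosts to only 75% of capacity

    cpu_hosts = (physical_servers * avg_cpu_cores_used) / (host_cores * headroom)
    ram_hosts = (physical_servers * avg_ram_used_gb) / (host_ram_gb * headroom)
    hosts_needed = max(1, math.ceil(max(cpu_hosts, ram_hosts)))

    print(f"Hosts needed (before adding one for failover): {hosts_needed}")

Even with generous headroom, twenty lightly loaded physical boxes collapse onto a host or two, which is where the hardware, power, and cooling savings come from.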

Most of the leading vendors also offer “high availability,” which means that virtual machines can automatically be moved between hosts, and if a host fails, the supervising software can restart those virtual machines on an available host in your cluster.  For those IT people carrying BlackBerrys who have to go to the server room at 3am to reboot physical equipment, welcome to the 21st century.

In addition, VMware at least offers a way for virtual machines to automatically move between hosts when a particular host gets too many requests for CPU or RAM from the virtual machines running on it.  This functionality helps to improve overall performance, which makes all the users happy and quiets the help desk (a little bit).  OK, so the users call you about something else, so the help desk is still not any less quiet, but at least you can cross one complaint off the list for the moment.

In sum, virtualization is a smart and efficient way to implement servers today.  I imagine if you work in IT that you are very likely to come into contact with virtualization soon if you have not already.  We converted about two years ago and we aren’t looking back!

Obama & Health Care IT

President Obama’s plan (published here) (the “Plan”) describes a multi-part approach to expanding the amount of health insurance available to those without insurance while attempting to reduce the costs of providing health care to Americans.  A portion of this plan involves the expansion of health information technology to help reduce the costs of administering health care.  On page 9 of the Plan, paper medical records are identified as a health care expense which can be reduced through records computerization.  The Plan cites a study by the RAND group (published here) (“RAND”) that indicates that the processing of paper claims costs twice as much as processing electronic claims.

Estimated Savings and Quality Improvements by Adoption of Health IT

The RAND group suggests that fully implemented health IT would save the nation approximately $42 billion annually, and would cost the nation’s health care system approximately $7.6 billion to implement.  RAND at 3.  According to their review of the literature on health IT adoption, approximately 20% of providers in 2005 had adopted an information system (which may have several meanings from patient reminder systems to clinical decision support).  RAND at 20-21.  Full implementation of health IT would require a substantial number of providers to convert to regular use in order for the total savings identified by RAND to be realized.  RAND estimated that in 2005 there were approximately 442,000 providers in the U.S.; this suggests that about 353,000 providers would need to convert from paper to electronic systems before the full savings to the health system would be realized.  RAND at 20.

Areas of savings in the outpatient setting noted include: transcription, chart pulls, laboratory tests, drug utilization, and radiology.  RAND at 21.  Areas of savings in the inpatient setting noted include: reduction of unproductive nursing time, laboratory testing, drug utilization, chart pulls and paper chart maintenance, and reduction of length of stay in the hospital.  RAND at 36.  Savings on the inpatient side account for approximately two-thirds of the total savings, and the largest area of annual savings is tied to the reduction in the length of stay of patients as a result of the adoption of health IT.  Id.  This overall cost savings is based on adoption of health records by virtually all health care providers over a 15-year period; the total savings to the health system during that time would total about $627 billion.  Id.
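As a quick check on the arithmetic behind those figures, here is a minimal Python sketch using only the numbers cited above from the RAND study:

    # Reproducing the back-of-the-envelope figures cited from the RAND study above.
    total_providers = 442_000   # RAND's 2005 estimate of U.S. providers
    adoption_2005 = 0.20        # roughly 20% had adopted some form of health IT

    providers_to_convert = total_providers * (1 - adoption_2005)
    print(f"Providers still to convert from paper: ~{providers_to_convert:,.0f}")
    # ~353,600, in line with the "about 353,000" figure above

    annual_savings = 42e9       # dollars per year at full implementation
    years = 15
    print(f"Savings over {years} years: ~${annual_savings * years / 1e9:.0f} billion")
    # ~$630 billion, consistent with the ~$627 billion total RAND reports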

The Plan also discusses increasing the quality of health care delivered to all patients through the implementation of disease management programs (which are driven by the health data of individual patients to monitor progress and outcomes), and the “realignment” of provider reimbursement with quality outcomes.  Plan at 7.  Realignment typically occurs when health insurance plans pay not for the total visits billed by a provider, but based on some kind of quality measure that tracks how well patients are doing in managing their health condition.  This is also driven by the availability of reliable health outcomes data (for example, the hemoglobin A1c test results of patients with diabetes over time, and the percentage that report a result under the “normal” or expected value).

The Trouble with Adoption of Health IT

Adopting health IT systems, however, is no small feat.  Systems have been available to the health care industry for a substantial period of time (Centricity, a health information system now owned by General Electric, was originally developed by MedicaLogic in the mid-80’s and became popular in the 1990s).  See Article.  In 2000, MedicaLogic had penetrated the practices of about 12,000 physicians in the U.S., or around 3% of the total market, and was described then as the market leader in electronic medical records (perhaps a total of 10% of the market had adopted some system by that time).  By RAND’s analysis, five years later 20% of physicians had adopted some form of health IT.

If market penetration is to double every five years, by 2010, 40% of physicians should be using a health IT system, and by 2015, 80% should have adopted such a system.  (Admittedly, this assertion is weak because there is not sufficient data in this article to support this assertion.  In addition, adoption rates tend to follow a parabolic rather than a linear pattern, so that larger numbers of adopters join the crowd as time progresses.  But, dear reader, please feel free to comment with specifics to help improve the quality of this article!)
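To make the doubling assumption concrete (and to show how it compares with the survey figures discussed in the next paragraph), here is a small Python sketch; the doubling rule is this post’s rough assumption, not an empirical model:

    # Naive projection: health IT adoption doubling every five years from the
    # 20% baseline RAND reported for 2005, capped at 100%.
    adoption = 0.20
    for year in range(2005, 2021, 5):
        print(f"{year}: {min(adoption, 1.0):.0%}")
        adoption *= 2
    # 2005: 20%, 2010: 40%, 2015: 80%, 2020: 100% (capped)
    # The 2008 surveys discussed below (roughly 13% to 18%, depending on the
    # definition of health IT) suggest the real curve was flatter early on.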

The New England Journal of Medicine, with a likely more restrictive definition of health IT, found that fewer than 13% of physicians had adopted such a system as of 2008, based on its sample of 2,758 physicians.  Article here.  An article in the Journal of Evaluation in Clinical Practice (“JECP”) reported that about 18% of the practices it surveyed (847 in total) had an electronic health record in use in 2008.  Article here.  As RAND had pointed out in its own literature search conducted in 2005, the definitions of health IT vary widely across the empirical surveys conducted, so an accurate estimate of market penetration is hard to come by.  However, it does appear that the number of practices that have adopted general health IT is not significantly higher than in 2005.

An interesting article suggested that some of the problem with health IT adoption may be regional – that some regions of the country tend to have a slower adoption rate of technology in general, which would tend to slow down the adoption of health IT in those areas.  Article here.  The JECP survey also indicated that specialty practices and smaller practices tend to be slower to adopt health IT as compared to their primary care provider counterparts.  Access to adequate capital to fund health IT purchases is an obvious reason for not implementing such systems.  Id. I would also posit that the adoption of health IT does not generally distinguish health care providers in the market of health care delivery (physicians don’t advertise that they have a health record system).  It would be interesting if patients could receive information on average health outcomes by physician when researching who they want to use for medical services (only possible if health IT is widely adopted and there is general consent to the publication of such data, which today is putting the cart before the horse).

There is, therefore, a market failure in that, if we accept that health IT reduces medical costs or improves outcomes over time, the market has not made a concerted effort to adopt this technology.  The Plan puts forward capital to help implement records and has an incentives component that rewards improved health outcomes.  Time will tell if these investments and market changes will actually reduce health care costs in the U.S.

White House Takes on Cybercrime

According to Yahoo News, President Obama plans to appoint a White House official to be in charge of coordinating the federal government’s response to cybercrime.  This comes after years of reports of identity theft, many tens of thousands of viruses aimed at security holes, mostly in Microsoft operating systems like Windows 98, XP, and 2000, and increasing security problems for infrastructure (like energy companies and utilities).  Click here for an article on hacking into the FAA air traffic control system.  Click here for a summary of attacks on the U.S. Defense Department and the U.S. electrical grid.

The problem is certainly not going away as the shadow market for hacking services is making a profit on the successful attacks of systems.  One matter not addressed today that might help improve security is the need for all information systems custodians to regularly report on security breaches.  The federal government does keep track and report on the number of attacks on federal government systems, but there is no single repository to keep track of attacks on private companies.  There is obviously no incentive for a private company to report security problems as this leads to fewer customers and could put the company out of business.  But even a single, national and anonymous reporting system would be a start to help gauge the depth of the problem.  Security problems are also a relevant consideration for consumers that might be giving data to a company to transact business, such as credit card, health, financial or other personal information.  Consumers should have the right to know about the security practices of businesses, and the effectiveness of these practices in protecting information from unauthorized use.

Furthermore, unless the market reflects the cost of security in the pricing of services, businesses will continue to operate without sufficient security in place, and our economy will continue to be at risk of being shut down by terrorists and hackers.  I suspect that this may be one of the areas where the market failure is so substantial that government intervention is justified to more seriously regulate computer security, especially in critical areas of the economy like banking, infrastructure, and the like.