News

Seeing Red

The Federal Trade Commission (FTC) promulgated regulations to help reduce consumer identity theft back in 2007, with implementation of these rules for “creditors” and national banks originally set to begin in 2008 (a deadline since pushed to 2009, and now to November 1, 2009, for certain kinds of creditors).  (See the Red Flags Rule here)

Identity theft is a real problem for people of all sorts (approximately 10 million people fall victim to this kind of fraud each year, at a loss of around $50 billion).  As a result, the FTC has interpreted the term “creditor” more broadly than the kinds of businesses we tend to think of, like credit card companies.  (See FTC FAQ)  According to the FTC, a creditor includes anyone that provides a service now and accepts payment later.  Lawyers routinely do that, as do health care providers, department stores with lay-away plans, and many other service businesses (except maybe your mechanic, who won’t give you your car until you pay for the service).  Because of the FTC’s broad application of the rule, the lawyers decided to sue the FTC to force an interpretation of the Red Flags Rule that excludes, you guessed it, lawyers.  (See the ABA Release here)

As a practical matter, under the Chevron doctrine federal courts generally defer to an administrative agency’s reasonable interpretation of the statutes it administers.  Every so often, courts will overturn an agency’s interpretation, but the odds are low.  (See Massachusetts v. E.P.A., 549 U.S. 497 (2007)).  The ABA’s odds of getting a decision in its favor are probably no better than average, and a win, in any case, won’t help other kinds of professionals that accept payments from customers over time.  And for lawyers, because no decision is expected before the latest compliance deadline of November 1, 2009, we find ourselves all in the same boat of needing to comply with the Rules.

Section 681.2 requires that covered organizations (a) periodically identify accounts that may be covered accounts under the rules, (b) develop a program that “is designed to detect, prevent, and mitigate identity theft” in connection with those accounts, and (c) administer the program by seeking board approval of the policy, training staff, and monitoring the program over time to ensure that it is overseen properly.  16 C.F.R. § 681.2(c)-(e).  The program must be in writing, and must be reasonable in relation to the size of the organization implementing it.

The Appendix to section 681 provides some guidelines for covered organizations in formulating their Red Flags Program.

The Red Flags Rule also requires that creditors establish a written policy that outlines how the organization will comply with the rules.  For health care providers looking for a sample compliance policy, the AMA has published one on its web site here.  The FTC has also published a document for creditors that are probably at low risk for identity theft here, a category that likely includes many solo and small law firms.

Once you have appropriately assessed your risks and written a plan, the plan must be approved by the ownership of your organization.  For solo and small firm attorneys who are already chief cook and bottle washer, that means you.  Larger corporations that have a board of directors will need to take board action to approve and be involved in the organization’s compliance with its program.

The guidelines emphasize that a creditor should exercise reasonable care to protect its covered consumer accounts from theft or unauthorized access.  Implicitly, this means that a covered organization should have appropriate data security systems in place that protect the organization’s data from loss, unauthorized access, or theft.  Health care providers should already be compliant, as they have been required to comply with the HIPAA security regulations since 2003.  These regulations require regular technical risk assessments, mitigation plans, access control mechanisms, and data backup plans (among other requirements in the rules – see 45 C.F.R. § 164 et seq.).

Lawyers, however, may not have had the pleasure of complying with these rules (unless, of course, you are a business associate of a covered entity and are now, under the ARRA, required next year to comply fully with the HIPAA security regulations that already apply to covered entities).  For example, if an attorney accepts payments for services through a web site, the attorney should evaluate the risk of identity theft from the site and take appropriate steps to mitigate those risks: ensuring she is using a current SSL certificate to encrypt communications with the client, not storing credit card numbers in a database that can be reached from the internet, and maintaining the web server so that it is patched for known security risks and runs appropriate anti-virus software.
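To make the SSL point concrete, here is a minimal sketch (in Python, using only the standard library) of a check that a site’s certificate still validates and is not about to expire; the hostname and the 30-day warning window are hypothetical examples, not part of any rule.

```python
# Minimal sketch: verify that a web site's SSL certificate validates and is
# not close to expiring. Standard library only; the hostname is hypothetical.
import socket
import ssl
from datetime import datetime


def days_until_cert_expires(hostname, port=443):
    """Return the number of days before the site's certificate expires."""
    context = ssl.create_default_context()  # also verifies the chain of trust
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    expires = datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
    return (expires - datetime.utcnow()).days


if __name__ == "__main__":
    remaining = days_until_cert_expires("payments.example-lawfirm.com")
    if remaining < 30:
        print(f"Warning: certificate expires in {remaining} days")
    else:
        print(f"Certificate OK, {remaining} days remaining")
```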

From there, staff will need to be trained to recognize when a consumer’s identity has been stolen and to take appropriate actions to protect the consumer from further loss.  The FTC form also indicates that outside agencies, such as a billing agency, may need to be trained (or you need to verify that the organization has its own acceptable policy for complying with the rules).  After that, the program requires an internal annual report on activities, and updating the program to address evolving threats to consumer identities.  Now that wasn’t so bad, was it?

The Quest for Meaningful Use

Section 4101 of the American Recovery and Reinvestment Act creates an incentive program for Medicare providers who adopt EHRs (and, after 2015, a penalty program that reduces reimbursement for those who do not).  See this article from June 2009 on “meaningful use.”  See also an earlier blog post on ARRA incentives here.

One of the provisions for receiving incentive payments is that the provider can demonstrate “meaningful use” of the EHR system.  The section also requires that this meaningful use occur on a certified EHR system.  The term “meaningful use” is not defined by the statute, except as follows: “(i) Meaningful Use of Certified EHR technology – The eligible professional demonstrates to the satisfaction of the Secretary, in accordance with subparagraph (C)(i), that during such period the professional is using certified EHR technology in a meaningful manner, which shall include the use of electronic prescribing as determined to be appropriate by the Secretary.”

The phrase presumably will be defined by regulation promulgated by the Secretary of Health and Human Services.  The thinking today is that meaningful use would be defined by the achievement of certain milestones over time by providers using EHRs.  Initially, the focus would be on actually putting data into the system.  With time, the definition would expand to include reporting on the data and evaluating it for trends.  And eventually, providers would be required to show an actual impact on patient health outcomes.  There is likely to be a similar movement within the private insurance world, toward a “pay for improved outcomes” model that moves beyond just reducing the number of times someone comes to the doctor’s office (the old HMO model of quality).

In more practical terms, a provider that wanted to demonstrate meaningful use would need to buy some software, take it out of the box, and actually use it to put some kind of data into it.  Most likely, a more sophisticated system purchaser would give some thought to how that data would be organized within the computer system, with the goal of being able to get it back out again on demand.  In the paper health record world, this is comparable to having a paper note to document the visit, and a separate flowsheet that is maintained to track certain kinds of lab results over time.  The flowsheet is the manually created output which ultimately can be used to evaluate patient outcomes of treatment.  For example, an HIV patient is routinely checked for his or her HIV viral load.  A lower number (or an undetectable viral load) is better than a higher one.  HIV care providers also keep track of the number of CD4 cells in a given blood sample: a higher CD4 count is better than a lower one.  Taken together over time, these two values track each other and predict whether a patient is doing better or worse with the disease.
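As a rough illustration of what that flowsheet looks like once it is electronic, here is a minimal Python sketch of a structure that records viral load and CD4 values over time so they can be charted together; the field names and sample values are made up for illustration and are not clinical guidance.

```python
# Minimal sketch of an electronic flowsheet for two HIV lab values tracked
# over time. Field names and sample values are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class LabEntry:
    drawn: date
    viral_load: int   # copies/mL; lower (or undetectable) is better
    cd4_count: int    # cells/mm3; higher is better


@dataclass
class Flowsheet:
    patient_id: str
    entries: List[LabEntry] = field(default_factory=list)

    def add(self, entry: LabEntry) -> None:
        self.entries.append(entry)
        self.entries.sort(key=lambda e: e.drawn)  # keep chronological order

    def trend(self):
        """Yield (date, viral_load, cd4_count) rows, oldest first, so the
        values can be charted or compared over time."""
        for e in self.entries:
            yield e.drawn, e.viral_load, e.cd4_count


sheet = Flowsheet("patient-001")
sheet.add(LabEntry(date(2009, 1, 15), viral_load=45000, cd4_count=310))
sheet.add(LabEntry(date(2009, 4, 20), viral_load=400, cd4_count=420))
for drawn, vl, cd4 in sheet.trend():
    print(drawn, vl, cd4)
```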

An observant provider would educate the patient about these lab results and their implications for health, and demonstrate how close adherence to the schedule for taking HIV medications helps improve the patient’s health over time.  An HIV provider would also be watching for unexpected changes in these values to determine if the patient should be evaluated for resistance of the disease to the current regimen.  HIV is an expensive and high risk disease to manage; but it only gets more expensive if the patient’s condition is not managed appropriately (with lengthy hospital stays, complications and other health issues).  In addition, a patient’s quality of life goes down the tubes with the progression of the illness; usually the side effects of the medications to treat the illness are the lesser evil.

An EHR can help to improve the efficiency of this quality and management process for providers.  A well-designed and well-implemented system will place relevant lab values onto an electronic flowsheet that can be charted and analyzed over time, avoiding the time spent updating paper forms and reducing data entry errors.  In addition, an EHR can present multiple views of the data depending on the patient’s health condition, and can help manage care to accepted standards by reminding providers of tests or actions that are due (such as annual pap smears, 10-year tetanus boosters, quarterly viral load testing, STD screenings, etc.).  EHRs can also cut down on duplicate orders for the same test (at least within a practice that uses the system) when a patient is seen by more than one provider over time, since all have access to the same information in the same format.
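A hedged sketch of that reminder logic might look like the following; the tests and intervals are illustrative examples only, not a care standard.

```python
# Minimal sketch of EHR-style reminders: given the date a test was last
# performed and a recommended interval, flag what is due. The intervals below
# are illustrative examples, not clinical guidance.
from datetime import date, timedelta

RECOMMENDED_INTERVALS = {
    "pap smear": timedelta(days=365),
    "tetanus booster": timedelta(days=365 * 10),
    "viral load": timedelta(days=90),
}


def due_reminders(last_performed, today=None):
    """Return the tests whose recommended interval has elapsed (or that have
    never been performed)."""
    today = today or date.today()
    due = []
    for test, interval in RECOMMENDED_INTERVALS.items():
        last = last_performed.get(test)
        if last is None or today - last >= interval:
            due.append(test)
    return due


print(due_reminders(
    {"pap smear": date(2008, 3, 1), "viral load": date(2009, 9, 1)},
    today=date(2009, 10, 15),
))  # -> ['pap smear', 'tetanus booster']
```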

While not yet fully defined, meaningful use will likely lead our nation to more defined care standards, with incentives (and potentially penalties) for better outcomes.  But a word of caution – patients ultimately have to make decisions about their own health.  Not everyone is convinced that having a BMI over 30 is bad enough to warrant exercising an hour every day and cutting calorie intake by 25-50%.  Or consider smoking, which leads to a fair amount of bad health outcomes over time, yet how many Americans still smoke?  Penalizing physicians for the stupid choices that patients make is not fair, even though the health outcomes for these patients will be worse than if the patient had listened to their physician.  Expect the definition for meaningful use to be published soon, but also expect changes over time, particularly on standards for health outcomes.

Facebook and Twitter: Implications for Your Business?

Technology presents us with new opportunities and challenges on a regular basis.  Social networks and other “web 2.0” applications are starting to make inroads into the mainstream of the internet (ask how many of your iPhone-using friends have apps for one or both of these to measure the reality of the hype).  As a result, staff at your business are bringing their internet usage habits into the workplace.  Prospective customers are looking for you through these tools.  And business owners may want to consider the implications for their organizations.

IT departments at most organizations have struggled to craft an effective internet usage policy for staff with internet access.  The difficulty has been in balancing the security of the network against viruses and other threats with the need of users to access internet resources for business purposes.  The rise of Google as a synonym for searching the web has increased the overall use of the internet as a business research tool.  Trying to keep inappropriate content from appearing in search results poses a real challenge for IT departments.

In addition, with the advent of more sophisticated attacks from web sites, IT departments have struggled to block phishing and other infectious sites and to keep their organization’s computers patched against attacks from the internet.  Facebook and Twitter have both been used by malicious users to launch attacks on users of these sites (either by writing malicious applications and publishing them on Facebook, or by posting malicious links in Twitter updates).  The unfortunate knee-jerk reaction of most IT departments is to simply block these sites at the corporate firewall, preventing staff from having any access to these internet resources.

The typical rationale has been that these are not work-related sites and staff are just wasting time using them on the clock; therefore, shutting down access to them at work is perfectly reasonable.  But that rationale may no longer hold as the web 2.0 world takes shape.  For one thing, more businesses are establishing fan pages on Facebook in order to advertise their services and provide information to their customers.  Innovative businesses may also develop Facebook applications that are both popular and help to advertise the services the organization offers.  Businesses also use Twitter to keep customers in the loop on company activities and events, or monitor Twitter to evaluate how their own advertising campaigns are reaching certain demographics.

Web 2.0 technologies are becoming more pervasive on the internet, which also raises the minimum skill set expected of staff at organizations that use web technologies to reach customers.  Blocking these technologies from the corporate network may result in a less-skilled workforce.  And, ultimately, according to Gartner, such efforts are futile and bound to fail because of the pervasive nature of these technologies.  (See CNET article)

It would seem, then, that liberalization of internet use policies at companies is inevitable.  And with that increased access come new responsibilities for staff and businesses.  A landlord sued a former tenant for defamation earlier this year over some tweets by the tenant about mold in her apartment.  (See article here)  Twitter itself is a rather informal medium for posting information online – similar to an instant message chat in the chat rooms of yesteryear (which seem so quaint today).  And because it streams posts in real time, you may say something that you later regret.  Imagine, for example, that your business allows access to Twitter, and one of your employees angrily posts a series of defamatory tweets about a competitor or vendor.  Your organization may be slapped with a lawsuit if that competitor is monitoring Twitter for tweets mentioning it by name.

Facebook presents similar challenges for organizations, especially where employees blur the line between their social lives and work lives by forming, for example, groups of coworkers on Facebook.  Suppose a group of employees creates a group for only certain kinds of employees from your organization, and intentionally excludes others (perhaps on the basis of gender or age).  Is your organization discriminating against the excluded group?  Does your organization have liability for the acts of its employees in forming the exclusive group?

The web can also be a source of trade secret leaks for those of you with proprietary information or processes that your business uses to generate revenue.  Social media also present challenges for protecting intellectual property and avoiding infringement claims by others (tarnishment of famous marks on Twitter – I’m sure a case is brewing as I type this story).

These questions are unanswered.  And I don’t offer these hypotheticals to scare your organization into shutting down the internet connection at the office.  My point is to encourage your organization to think about its policies on internet usage and what constitutes acceptable use of the internet during normal work hours.  Establishing an effective policy, and consistently enforcing it with your staff, goes a long way toward managing your exposure to a lawsuit.  Controlling the internet at the organization’s firewall is unlikely to be a sufficient risk management tool.

There are a number of good starting points for an organization’s internet usage policy.  Here are some principles to consider when drafting yours:

  1. Empower staff to be responsible for their internet usage.
  2. Disrespectful communication is not acceptable, whatever the medium of communication.
  3. Do not download and install software from the internet that is not approved by your IT staff.
  4. Use the internet for professional reasons.
  5. Be mindful that staff representations online reflect on the reputation of their employer.
  6. There are real-world consequences for staff that abuse access to the internet.

If your organization uses Facebook or Twitter today to market itself, reinforce with your staff that organizational posts should be approved before they go up on the web.  Staff should resist the immediacy of these services in order to ensure that a consistent and accurate message is communicated to the outside world.

Lost Data in the Cloud: How Sad

The headlines are ablaze because somebody over at the company Danger upgraded a storage array without making a backup, and voila – bye-bye T-Mobile contact data.  (See the article on The Washington Post here)  Nik Cubrilovic’s point in his article is that data has a natural lifecycle, and you should be able to survive without the contacts on your phone.  But he also makes the point that all sysadmins have memories of not being able to recover some data at some point, and sweating bullets as a result.  His commentary is: this stuff is hardly as reliable as we expect it to be.  “Cloud” computers are no different, except that they are generally managed by professionals who improve the odds of successful recovery compared to the basement enthusiasts.

Having a backup plan is important.  Testing your backups periodically is important.  But generally, the rule is that the most important data gets the most attention.  If you have to make a choice between backing up your T-Mobile contacts and your patients’ health records, the latter probably will get more attention.  That’s in part because there are laws that require more attention to the latter.  But it is also because you probably won’t die if you can’t call your aunt Susan without first emailing your mom for her number.  You can die if your doctor unknowingly prescribes you a medication that interacts with something not in your chart because of data loss.

But the bottom line is this: data loss is inevitable.  There is a tremendous amount of data being stored today by individuals and businesses.  Even the very largest and most sophisticated technology businesses on Earth have had recent data losses that made the headlines.  But the odds of data loss if you do nothing about backups are still higher than if you at least use a cloud service.  Oh, and if you use an iPhone with MobileMe, it syncs your contacts between your iPhone, your computer, and Apple’s http://www.me.com, so you actually have three copies of your contacts floating around, not just one copy in the “cloud.”  Maybe you T-Mobile people aren’t better off by “sticking together.”

Big Dilemmas for Web Security

The federal government is getting into the fray over internet security in a national crisis.  (See Yahoo Article here)  A Senate committee considered and then promptly dropped language in a cybercrime bill that would have authorized the President to shut down internet traffic to compromised web sites.  This comes in the larger context of trying to set national policy on technical security, in light of our increasing dependency on the framework created by the internet.  Assuming that shutting down a web site were technically feasible, a wartime President would likely have the authority to do so, whether Congress passed a law about it or not.  See U.S. Const. Art. II Sect. 2 cl. 1.  As a practical matter, if the President could order the rounding up of U.S. citizens during World War II solely because of their race, I think the President can safely assume that shutting down a web site would be constitutional.  See Korematsu.

The difficulty today, however, is that following 9/11, President Bush asserted that we are constantly at war with terrorists.  Unlike a more traditional notion of war which has a relatively clear start and end, defining war in this manner means that the President is constantly acting within his war powers.  I don’t think the founders of our nation intended for us to have a king, or contemplated that we would be in a constant state of war.  And the danger is that the President would exercise the power to shut down certain web sites deemed a security risk, without much recourse for the web site owner.  So sites that might have an infection could be shut down, but so could those that disagreed with Presidential policies.

The risk to our internet infrastructure is real.  The authors of computer viruses today have come a long way from the kids of the 1990s who were just trying to annoy you.  Major web sites like yahoo.com, and malicious ads served through Google’s AdWords, have carried viruses that attack users of those sites, potentially infecting many millions of computers.  Our ability to respond effectively to such problems is directly related to how well we prepare for them.  Perhaps instead of delegating such broad authority to the President, we should work on delegating power to act under more specific circumstances, which would better balance the free speech rights of web site operators against the technical security needs of the nation.

Chapter 3: Activate Me

A cornerstone of security for most computer systems is the user account.  The user account is a way of defining what each human being on the system can do with (or to) the system.  Universally, systems are designed with a user hierarchy in mind: users at the bottom rungs of polite computer society may be able to log in and look at a few things, but not make any changes or see anything particularly sensitive.  Those at the top may exercise complete control over core system functions or services.  The two basic tenets of a security plan are: (1) give each user the fewest privileges on the computer system practical for that person’s function, and (2) limit the number of user accounts that have complete access and assign these to the trusted few in the organization.
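As a toy illustration of those two tenets, here is a minimal Python sketch of a role-to-privilege mapping; the role names, privileges, and accounts are hypothetical.

```python
# Minimal sketch of least privilege: each role carries only the privileges its
# function needs, and only a short list of accounts gets the admin role.
# Role, privilege, and account names are hypothetical.
ROLE_PRIVILEGES = {
    "viewer": {"read"},
    "clerk": {"read", "update"},
    "admin": {"read", "update", "delete", "manage_users"},
}

USER_ROLES = {
    "jdoe": "viewer",
    "bsmith": "clerk",
    "itmgr": "admin",   # keep this list of administrators as short as possible
}


def can(user, privilege):
    """Return True if the user's role includes the requested privilege."""
    role = USER_ROLES.get(user)
    return privilege in ROLE_PRIVILEGES.get(role, set())


assert can("jdoe", "read")
assert not can("jdoe", "delete")
```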

These principles have a corollary consequence – the IT department is typically the organizational unit that controls privileges for new staff that join the organization.  The process to do this is relatively straightforward: the hiring supervisor completes a form online that notifies the IT department of a new user account to be created.  The actual technical process to establish a new account is relatively lengthy due to the ever-increasing number of systems and applications that require a password.  Not surprisingly, our user community is made unhappy when a new user account doesn’t work “out of the box.”  This problem culminated in a meeting of some of the unhappy users with me, the purpose of which I think was as much to remind me of where my bread was buttered as it was to seek a better way to activate new accounts.

Before a process can be improved, one must understand the steps involved in it.  Process improvement also requires that data be collected on the frequency of the problem in order to be able to measure improvements with changes to the process.  But in this case, the real problem was a more general frustration with the technology and the sense that the technology department had the wrong priorities, or at least a list of priorities that was at variance with what this group of users thought should be the department’s priorities.

So what do you do?  For one thing, having enough notice of a new user account helps to ensure that the account is created on time.  Having time to set up the account also gives IT time to test it and make sure that it works before turning it over to the user.  As we discovered, having a written checklist for the process also helps to cut down on errors (especially if the administrator is interrupted while activating the account, which surely never happens elsewhere).  There are also technology solutions for managing accounts across multiple information systems (for example, some kind of single sign-on technology that stores the account information of the other systems within the SSO system).  These solutions typically cache subordinate system passwords and pass them to those systems on demand, so that the user need only remember the primary account password (such as their Active Directory login).
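Here is a minimal sketch of what such a written checklist might look like if captured in code; the systems and steps listed are hypothetical examples, not our actual activation procedure.

```python
# Minimal sketch of an account-activation checklist: each step is recorded as
# it is completed, and anything missed is reported before handoff to the user.
# System names and steps are hypothetical.
ACTIVATION_CHECKLIST = [
    ("Active Directory", "create account and set temporary password"),
    ("Email", "provision mailbox and test send/receive"),
    ("Medical record system", "create user, assign role, test login"),
    ("Single sign-on", "register subordinate system credentials"),
]


def run_checklist(username):
    """Walk the checklist for a new account and return any skipped systems."""
    completed = {}
    for system, step in ACTIVATION_CHECKLIST:
        # In practice each step would be performed and then verified (for
        # example, by a test login); here we simply record that it was done.
        print(f"[{username}] {system}: {step}")
        completed[system] = True
    return [s for s, _ in ACTIVATION_CHECKLIST if not completed.get(s)]


assert run_checklist("new.hire") == []
```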

We also implemented a feedback process so that a new user (or their supervisor) could give the IT department feedback on problems with the account.  This information can be used for training or for process improvement, particularly where trends are evident in the errors over time.  The problem with this process was that the number of errors reported over time was relatively small, and the fact is that you will never have a zero error rate with any process, no matter how much attention you pay to it.  However, if you activated thousands of accounts each year, the data collected would be more useful.

All of these tools only work when there is a good relationship between the users requesting accounts and the IT staff that create them.  And for IT managers, this may be the underlying issue that causes the actual tension in the room.

One way to improve user relations is to regularly talk with them to understand the issues and to get feedback on the IT department.  This goes beyond an annual user survey and requires an IT manager’s attendance at meetings with users.  In addition, having avenues to communicate with the user community when there are system issues is important.  Finally, advertising the efforts of the IT department to improve processes with the most complaints can help improve how users feel about the department’s services and staff.  Whenever you can, take the complaint as an opportunity to improve relations with your customers and advertise your success at resolving it.

Ant CyberSecurity

Ants in one’s kitchen are a pest (and a difficult one to get rid of once they have found something good to eat), but ants may have a more constructive future annoying cyberthreats in digital form.  Haack, Fink, et al. have written a paper on using ant techniques for monitoring and responding to technical security threats on computer networks.  As they point out, computer networks continue to become more complex and challenging to secure.  System administrators flinch at the thought of adding a new printer, fax machine, or other device because of the increase in monitoring and administrative tasks.  The problem is only getting worse as more devices gain independent IP addresses on networks, like copiers and refrigerators.

Securing these devices poses a monumental task for IT staff.  Haack, Fink et al. have proposed an alternate security method based on the behavior of ants in colonies.  In their paper, Mixed-Initiative Cyber Security: Putting humans in the right loop, the authors describe at a high level how semi-autonomous bits of code might work in concert to respond appropriately to threats, minimizing the amount of human intervention required to address an issue.  From a system administrator’s perspective, there are a lot of balls to juggle to keep a network secure.  Most relatively complex IP devices generate logs that require regular review.  In addition, some devices and software will send email or text alerts based on certain conditions.  Windows-based computers run a suite of patching and anti-virus/anti-spyware software that requires monitoring.  Internal network devices of any sophistication will log activity and output alerts.  Border control devices (firewalls) can be very noisy as they attempt to repel attacks and unwanted network traffic from outside the secure network.  Printers create logs and alerts.  And the list goes on and on as you begin to examine particular software systems (such as database and mail servers).  A lot can go wrong.

Haack, Fink et al. propose a multi-tiered approach to tackling these security issues.  The lowest-level agents are “sensors,” which correspond to ants in real life.  Sensors roam from device to device searching for problems and reporting them to other sensors and to sentinels.  For example, you could write a sensor whose only interest is finding network activity on a particular UDP or TCP port above a certain pre-defined threshold on a device.  The sensor would report back to its boss, a “sentinel,” that a particular computer had an unusual amount of network activity on that port.
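A minimal sketch of such a sensor might look like the following (in Python); the port, threshold, and traffic counts are made-up illustrations, and the counting function is a stub rather than anything from the authors’ paper.

```python
# Minimal sketch of a "sensor": it watches one port and reports a finding to
# its sentinel only when activity crosses a threshold. The counting function
# is a stub; a real sensor would read interface or firewall counters.
class Sensor:
    def __init__(self, port, threshold, get_packet_count):
        self.port = port
        self.threshold = threshold
        self.get_packet_count = get_packet_count  # callable: host -> packets seen

    def inspect(self, host):
        """Return a finding for the sentinel, or None if nothing unusual."""
        count = self.get_packet_count(host)
        if count > self.threshold:
            return {"host": host, "port": self.port, "packets": count}
        return None


# Hypothetical stub standing in for real traffic counters.
fake_counts = {"workstation-17": 12000, "workstation-18": 40}
sensor = Sensor(port=445, threshold=1000,
                get_packet_count=lambda host: fake_counts.get(host, 0))

print(sensor.inspect("workstation-17"))  # a finding to report to the sentinel
print(sensor.inspect("workstation-18"))  # None: nothing to report
```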

Sentinels are in charge of security for a host or group of similarly configured hosts (for example, all Windows file servers, or all Windows XP Professional workstations in a domain).  Sentinels interact with sensors and are also charged with implementing organizational security policy as defined by the humans at the top of the control hierarchy.  For example, a policy might be drafted requiring that all Windows XP workstations have a particular TCP port closed.  Sentinels would be taught how to configure their hosts to close that particular inbound TCP port (for example, by executing a script that enables TCP filtering on the workstations’ network adapters, or by configuring a local software firewall).

Sentinels learn about problems from sensors that come to visit.  Sentinels can also reward sensors that provide useful information, which in turn encourages more sensors to visit the sentinel (much as foraging ants lay down trail information that can be read by other ants).  Sensors are designed to like being patted on the head by the sentinel, so handing out enough pats leads more sensors to stop by for theirs.  Of course, if a sensor has nothing interesting to report, there is no pat on the head.  Sensors that rarely have useful information are taken out of service or self-terminate, while rewarded sensors are used by the sentinels to create more copies.  So, if computer problems are like sugar, the ants (sensors) that are best at finding the sugar are the ones that get reproduced.
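Here is a rough sketch of that reward-and-selection loop, assuming arbitrary scoring thresholds of my own invention rather than anything specified in the paper.

```python
# Rough sketch of reward and selection: sensors that bring useful findings are
# rewarded and eventually copied; sensors that rarely do are retired.
# The scoring thresholds are arbitrary illustration values.
import copy


class RewardingSentinel:
    def __init__(self, sensors, retire_below=-3, clone_above=3):
        self.sensors = list(sensors)
        self.scores = {id(s): 0 for s in self.sensors}
        self.retire_below = retire_below
        self.clone_above = clone_above

    def visit(self, sensor, finding):
        """A sensor stops by: pat it on the head if it brought a finding."""
        self.scores[id(sensor)] += 1 if finding else -1
        if self.scores[id(sensor)] >= self.clone_above:
            clone = copy.deepcopy(sensor)    # successful sensors reproduce
            self.sensors.append(clone)
            self.scores[id(clone)] = 0
        elif self.scores[id(sensor)] <= self.retire_below:
            self.sensors.remove(sensor)      # unhelpful sensors are retired
            del self.scores[id(sensor)]


class DummySensor:
    pass


s = DummySensor()
sentinel = RewardingSentinel([s])
for _ in range(3):
    sentinel.visit(s, finding={"port": 445})  # three useful findings in a row
print(len(sentinel.sensors))  # 2: the productive sensor was cloned
```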

In a computer network, if a known hole is identified in the configuration of a Windows workstation, sensors that are designed to find that hole will be rewarded at the expense of those that are looking for an older problem that has already been patched.  The security response to new and evolving problems should therefore modify itself over time as new problems are identified and passed along down the hierarchy to the sensors.

Haack, Fink et al. also discuss the roles of sergeant and supervisor (apparently appreciating the alliterative value of having all the roles in the paper start with the letter “S” – who says computer scientists don’t have a sense of humor?).  The sergeant coordinates the sentinels for an organization and presents information graphically to the supervisors (the human beings who manage the security system).  The sergeant is the implementer of organizational policies set by the supervisors (all workstations will have a firewall enabled; the most recent anti-virus definitions will be applied within one day of being released by the vendor).

From the paper, I presumed that the sentinels actually carry out changes to host devices when they find, based on information from sensors, a host that is not aligned with an organizational policy.  However, this is not discussed in detail.  The authors suggest it in section 3.3 with the reference to sentinels being responsible for gathering information from sensors and “devis[ing] potential solutions” to identified problems.  My guess is that tool kits for implementing certain solutions to identified problems would be written ahead of time by the system developer (for example, how to download and apply a recent anti-virus definition file for a particular desktop operating system).

The authors also envision that the sergeant might be granted authority to acquire external resources automatically without seeking prior approval from its human supervisors, at least for certain maximum expenditures.  For example, had the anti-virus subscription for definitions expired, a supervisor might grant the sergeant the authority to renew that subscription so long as it cost less than $x to do so.  A growing number of software makers have designed subscription services for software updates, many of which cost a set amount per month or year.  Most organizations that use these services would budget to pay for the service each year, so automatically authorizing such expenses might make sense.  This would also avoid lapses in security coverage that occur today in plenty of organizations that do not have adequate controls over renewal services.

The authors discuss the issue of cross-organizational security in section 1 of their paper, indicating that “coordinated cyber defensive action that spans organizational boundaries is difficult” for both legal and practical considerations.  However, the proposed security response described by the authors could be improved if there was a way to securely share information with other organizations operating an “ant” cybersecurity system.  Sharing either active or popular threats, or tool kits for quickly responding to specific threats might help to improve an organization’s overall security response, and the larger value of the draft security system.

Information security continues to be a significant issue for most organizations that have increasingly complex information systems.  Establishing policies and implementing these policies represents a significant time investment, which many organizations cannot afford to make.  The more that security mechanisms can be automated to reduce risk, the greater the value to organizations especially where qualified information security experts are unavailable or too expensive for an organization’s security plan.  I’m interested to see where this concept goes in the future, and whether criminals will begin to design security threats to infect the security system itself (as they have in the past for Symantec’s anti-virus software).

Rescuecom v. Google: Trademark Infringement

In a recent post on Google’s use of trademarked words as advertising search terms, one of the questions I raised was whether Google is really any different from a newspaper that runs an ad infringing a third party’s trademark.  Rescuecom Corp. v. Google, Inc. may help to provide some insight into this question.

In Rescuecom, the Court of Appeals for the Second Circuit was asked to determine if the plaintiff had alleged a cause of action for trademark infringement against Google.  Rescuecom alleged that some of its competitors were purchasing its trademark as a keyword that would trigger the competitor’s ad when a Google user searched for Rescuecom’s trademark.  For fun, I searched for this trademark but got no advertisements on Google or Yahoo.  I guess the competitors got shut down by all of the litigation that was ongoing in this matter.

In any event, the Court goes on to discuss whether Google’s practice of offering trademarks for auction to competitors could itself be infringement by Google.  The Court noodles through the complaint in this manner: (a) most of Google’s revenue comes from advertising, (b) Google has a financial stake in the effectiveness of the advertisements it runs for advertisers, and (c) trademarks of well-known companies have value as keywords for advertisers; therefore, Google can be liable for trademark infringement if the trademark is “used in commerce” as that term is defined in section 1127 of the Lanham Act, and the use is likely to cause confusion on the part of users of Google’s search engine when those users are presented with advertisements from competitors of the trademark holder.

The Court does not fully explain the logic in (b) above.  For those who use Google’s AdWords, an advertiser can create an ad and determine when that ad will display on Google’s web site based on the keywords that Google users enter to search for web content.  For example, if you were looking for Starbucks and used that trademark as a search term, Google would return web pages it thinks are relevant, along with advertisements tied to that search term.  Advertisers place a bid on the maximum amount they will pay for a click on their ad.  So, if there were multiple advertisers with “Starbucks” as their keyword, the one with the highest bid would display at the top of the list of advertisements (usually in the right-hand column, though Google also periodically places ads at the top of the search results listing on the left).  Clicking on an advertisement takes the user to the advertiser’s web page, which will not necessarily be Starbucks’ home page.
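A deliberately simplified sketch of that auction is below: it ranks purely by bid, whereas the real AdWords auction also weighs factors like ad quality, and all of the advertisers and bids shown are fictional.

```python
# Simplified sketch of a keyword auction: the highest bidder on a keyword gets
# the top ad slot. (The real AdWords auction also weighs ad quality, omitted
# here.) Advertisers and bids are made up.
bids = {
    "Starbucks": [
        ("starbucks.com", 1.50),            # trademark holder
        ("karbucks-coffee.example", 2.25),  # hypothetical competitor
        ("discount-beans.example", 0.80),
    ],
}


def ranked_ads(keyword):
    """Return advertisers for a keyword, highest bid first."""
    return sorted(bids.get(keyword, []), key=lambda ad: ad[1], reverse=True)


for advertiser, bid in ranked_ads("Starbucks"):
    print(f"{advertiser}: ${bid:.2f} per click")
```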

Google also benefits from a competitor outbidding the trademark holder per click through the AdWords system.  So if Starbucks and I both decided to run an ad in AdWords based on the trademark “Starbucks,” I could force Starbucks to increase the amount it would pay per click by automatically increasing my bid for the same keyword.  Google, in fact, has a tool to allow bids to automatically change based on the market for a keyword.

Consumers, I suppose, could be confused by this.  Rescuecom alleged that consumers were likely to be confused (and they were losing business to their competitors as a result) by the ads because they display in a “manner which would [not] clearly identify them as purchased ads…”  Rescuecom, at *6.  And the competitors that were using Rescuecom’s mark were in the same business as Rescuecom, and I seriously doubt they would put a link at the top of their page that would say “Looking for Rescuecom Corp?  Here is a link to their page.”  I suppose if I wished to sell my “Karbucks” brand coffee, the easiest way to advertise it would be to buy the trademark “Starbucks” on AdWords, and just have a very large marketing budget to bid on that keyword.  Maybe that is the reason that Starbucks coffee is around $12-$15 per pound – the AdWords bids keep going up because of competing trademark infringers!

I suppose that Google could use the existing trademark database, TESS, which is maintained by the US Patent and Trademark Office, and simply prevent anyone (or anyone other than the current trademark holder) from bidding on a particular registered and active trademark.  Factually, there is also a question of how much of Google’s advertising money comes from the misuse of trademarks.  For example, I do have the word Starbucks in an ad that I have run, but the link in my ad takes you to an article on my blog about another trademark infringement case against a Starbucks competitor.  I’m not selling coffee on my blog (at least not yet), so no danger of infringement.  When I originally created the ad, there was a lag before it was approved, during which I presume Google at least had the opportunity to check whether I was trying to infringe Starbucks’ mark.  Stay tuned!

Google AdWords and Trademarks as Search Terms

Google has been sued by a number of trademark holders on the grounds that competitors can establish an AdWords account with Google and bid on trademarks as search terms.  The consequence is that users who search for a trademarked term may see AdWords ads paid for by competitors of the trademark holder.

Suit was brought in the EU.  A ruling is expected next year.  However, a senior judge on the EU court reviewing the complaint, offering a non-binding assessment of the situation, stated that Google’s AdWords service probably falls within an “information society services” exemption from trademark infringement liability, so long as Google remains “neutral” about the information that it provides to search users.  (See the Article on Yahoo News here)

AdWords itself works by allowing advertisers to bid on particular keyword search terms.  A high bidder’s display ad will appear next to Google search results (sometimes at the top, but more often in the right-hand column of the search results list).  For example, a Google search for “Louis Vuitton” returns the official web site of the corporation as the first result in the left-hand column of results.  However, advertisers selling knock-offs and fakes appear in the right-hand column, taking users off to unapproved web sites offering products that compete (mostly on price) with the trademarked item searched for by the user.  Eluxuryin.com was one of the advertisers when I searched today; they appear to be offering “Louis Vuitton” products at 60-90% off the list price.  According to louisvuitton.com, LV offers its products for sale exclusively through its own web site and through eluxury.com.  (See the information here on the LV website, FAQ | Questions About Louis Vuitton)  Eluxuryin.com does not appear to be a legitimate LV reseller, and it is not the only advertiser that pops up on a search for the trademark “louis vuitton.”

Trademark law in the U.S. is governed by the Lanham Act, codified at 15 U.S.C. 1051 et seq.  Sections 1114 and 1125 are the typical bases for bringing a trademark infringement claim against an alleged infringer of one’s trademark.  Section 1114(1)(a) requires that an infringer use a registered mark in commerce in connection with the sale or advertising of goods in a way that is likely to cause confusion on the part of the prospective buyer of the thing advertised.  Louis Vuitton has a registered word mark (registration number 2904197), filed in 2003 and registered in late 2004 in the U.S., that it uses worldwide to indicate that LV is the maker of the thing to which the trademark is affixed.  Another party offering for sale a bag identified as “Louis Vuitton” without LV’s permission is likely an infringer of LV’s trademark.

The question is whether Google could also be a trademark infringer by selling ad placement services to the infringing advertiser.  Google does benefit from the auction of trademark terms available for advertising on its site.  To the best of my knowledge, Google does not take down advertisements that might be infringing or that would direct a search user away from the legitimate business web site of the trademark holder whose trademark is being used as a keyword.  On the other hand, Google does not write the infringing ads, either.  Arguably, Google is not selling goods with a false designation mark or causing the advertisements to be created by the actual infringer.  In this way, Google is similar to a newspaper that sells classified ad space to advertisers.  I don’t think that a newspaper would be liable for trademark infringement for running a third party’s ad that infringed on another’s mark or might confuse customers about whose products were being advertised.  Does AdWords work so differently as to expose Google to heightened or joint liability with infringers?

Chapter 2: Stop Screwing With Me

Writes an angry user one Sunday morning at 7:48 a.m.:

“I don’t who know who’s doing the back up this morning, but who ever it was cut me off in the middle of my writing a complicated and lengthy assessment on a patient that is now lost.  I know you can tell when we’re using the [database], so why did this happen?”

The organization employs a medical record system that runs on Oracle 10.  The front-end application is a Visual Basic application that uses ODBC to connect clients to the back-end database.  Our normal business hours are Monday through Friday, 8:30 a.m. to 9:00 p.m.; however, users do have remote access to our systems outside of normal business hours.  We therefore implemented a server maintenance schedule that occurs on Sunday mornings, knowing that some staff would still be inconvenienced by this decision, but that at least most of the time the database would be available during normal business hours.

In theory, one could ask Oracle “who’s logged in right now” and it would tell you as much as it knows (which may or may not be the whole story because of certain design aspects of the database and the application).  Of course, the basic problem is that if we asked the database this question, most of the time at least some user would be logged in because of remote access.  Consequently, we made a decision to perform cold backups of the Oracle database on Sunday mornings, which bring down the database for about three hours each Sunday.  Upgrades, patches, and other server changes may make the database unavailable for longer periods.  We did provide notice to our user community of our maintenance schedule.
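For what it’s worth, here is a minimal sketch of how one might ask Oracle that question, assuming the cx_Oracle driver and an account with access to v$session; the connection details are placeholders.

```python
# Minimal sketch: ask Oracle "who's logged in right now" before starting a
# cold backup. Assumes the cx_Oracle driver and an account privileged to read
# v$session; the connection details below are placeholders.
import cx_Oracle


def active_users(dsn, user, password):
    """Return (username, machine) for every named session currently open."""
    conn = cx_Oracle.connect(user=user, password=password, dsn=dsn)
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT username, machine FROM v$session WHERE username IS NOT NULL"
        )
        return cur.fetchall()
    finally:
        conn.close()


sessions = active_users("dbhost/EMR", "backup_admin", "********")
if sessions:
    print("Users still connected, delaying backup:", sessions)
else:
    print("No named sessions, safe to start the cold backup")
```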

The user, however, raises two points.  First, can’t the IT department find an alternate way to back up the database that would not cause a server outage?  And second, why isn’t the IT department omniscient enough to know if a user has been bad or good?

To the first point, a cold backup is a reliable method of backing up an Oracle database, but it is not the only or the most sophisticated method.  Oracle supports a number of alternatives, including hot backups and RMAN-based backups.  We use cold backups because they are the simplest and most certain way to ensure the database can be recovered in the event of a system problem.  Our medical record system is the only database we support that uses Oracle for the database engine (we also support a version of Pervasive, two versions of Microsoft’s SQL Server, MySQL, and various flavors of Microsoft Access), so we are not able to retain a full-time Oracle expert to administer it.  A more sophisticated database administrator would be able to configure hot backups to run safely (which would not require the database to be down), or to perform backups with RMAN, which is integrated into the Oracle administrative tools.

So the technology is there, but the expertise is outside our current capabilities.  Surprising?  Probably not.  Every database technology in the end performs a similar set of tasks – storing and retrieving data in an efficient manner.  However, how this simple idea is implemented varies widely across database engines and operating systems, and expertise has developed around each one.  The typical corporate IT department is unlikely to have this expertise in-house because of the cost of the resource relative to its utility to the organization compared with other priorities.  Smaller IT departments are generally made up of generalists who have broad but relatively shallow knowledge of the numerous systems and components that the department babysits for the company.

Expertise not maintained in-house must be contracted from an outside pool of IT experts.  However, there is no standard or certification for objectively evaluating external expertise (as there is for physicians and lawyers, both of whom must pass a state-sponsored certification exam).  In addition, many IT departments elect to keep critical systems under the control of in-house staff, even if more expert staff are available to them.

In our case, by design, we elected to depend on the vendor of the health record system for Oracle database support.  Our approach was to call on this expert for dire emergencies.  The inconvenience to our users of Sunday morning backups seemed less than dire, so we did not seek further advice from the support vendor on how to mitigate it.  That meant we would need to develop some Oracle expertise in-house to do the day-to-day maintenance on the database.  The extent of our knowledge was to use cold backups.

If the IT department were to hire a contractor who is an Oracle 10 expert to implement RMAN for backups and recovery, an internal member of the IT department would also need to be trained to operate RMAN, address errors, test the backups to see whether they are recoverable, and modify the RMAN configuration as a result of changes to the database (for example, as a consequence of an upgrade to the application).  Over the longer term, the initial cost to configure RMAN is the smaller cost compared with the ongoing maintenance cost of ensuring that RMAN continues to work properly post-implementation.  Additionally, the IT department itself would need to cope with staff turnover – what happens to the knowledge about RMAN when the trained internal resource leaves the organization, or is promoted?

This problem is not really avoided if the department elects to contract with the Oracle consultant for ongoing support, in the sense that in the long term, the consultant may stop providing the service, may become unavailable, or may want to be paid considerably more for his expertise than was originally bargained for.  So, either way, the total cost over the long run has to be balanced against the relative importance of implementing the service, in relation to the longer list of competing priorities for the IT department.  Given the basic kind of economic decisions made by small IT departments, inconveniencing a few users on Sunday mornings will almost always cost less than the relative expense and difficulty of a more sophisticated system.

As to the second point, users often presume that IT staff watch their every move like a bunch of voyeurs at the digital keyhole.  As technology has developed, so have the tools for monitoring user  activity.  But the truth of the matter is that we do not have enough time typically to review this activity, unless there is a problem or issue.  And in the example above, while we may have been able to detect that the user was logged in, there was no way to know if the user was reading the news on Yahoo! or typing the thirteenth page of his graduate thesis.

Could we do better?  Of course.  A larger budget would ease the trade-offs that IT departments make because of scarce resources.  As to the problem of kicking users out – we made a point of doing our best to post notice of unanticipated outages during business hours, but there is a limit to how effective notice of regularly scheduled outages will be for the hard-headed who insist on working on complicated matters in the middle of our backup schedule.  And you just can’t make everyone happy.