Lost Data in the Cloud: How Sad

The headlines are ablaze because somebody over at Danger upgraded a storage array without making a backup, and voilà – bye-bye T-Mobile contact data.  (See the article on The Washington Post here)  Nik Cubrilovic's point in his article is that data has a natural lifecycle, and you should be able to survive without the contacts on your phone.  But he also makes the point that all sysadmins have memories of being unable to recover some data at some point, and sweating bullets as a result.  His commentary: this stuff is hardly as reliable as we expect it to be.  "Cloud" computers are no different, except that they are generally managed by professionals who improve the odds of successful recovery compared to the basement enthusiasts.

Having a backup plan is important.  Testing your backups periodically is important.  But generally, the rule is that the most important data gets the most attention.  If you have to choose between backing up your T-Mobile contacts and your patients' health records, the latter will probably get more attention.  That's partly because there are laws that require more attention to the latter.  But it's also because you probably won't die if you can't call your aunt Susan without first emailing your mom for her number.  You can die if your doctor unknowingly prescribes you a medication that interacts with something missing from your chart because of data loss.

But the bottom line with this: data loss is inevitable.  There is a tremendous amount of data being stored today by individuals and businesses.  Even the very largest and most sophisticated technology businesses on Earth have had recent data losses that made the headlines.  But the odds of data loss from doing nothing about backups are still higher than if you at least use a cloud service.  Oh, and if you use an iPhone with MobileMe, it syncs your contacts between your iPhone, your computer, and Apple's http://www.me.com, so you actually have three copies of your contacts floating around, not just a copy in the "cloud."  Maybe you T-Mobile people aren't better off by "sticking together."

Chapter 3: Activate Me

A cornerstone of security for most computer systems is the user account.  The user account is a way of defining what each human being on the system can do with (or to) the system.  Universally, systems are designed with a user hierarchy in mind: users at the bottom rungs of polite computer society may be able to log in and look at a few things, but not make any changes or see anything particularly sensitive.  Those at the top may exercise complete control over core system functions or services.  The two basic tenets of a security plan are: (1) give each user the fewest privileges on the computer system practical for that person's function, and (2) limit the number of user accounts that have complete access, and assign these to the trusted few in the organization.
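To make those two tenets concrete, here is a minimal sketch of a role-to-permission map; the role names, permissions, and function names are hypothetical, invented purely for illustration.

```python
# A minimal sketch of the two tenets: least privilege, and few admins.
# Roles and permissions here are hypothetical examples.

ROLE_PERMISSIONS = {
    "clerk":     {"read_public"},
    "clinician": {"read_public", "read_records", "write_records"},
    "admin":     {"read_public", "read_records", "write_records", "manage_users"},
}

def can(user_role: str, action: str) -> bool:
    """Grant an action only if the user's role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(user_role, set())

# New accounts default to the bottom rung; "admin" is assigned only to
# the trusted few, by explicit decision rather than by default.
assert can("clerk", "read_public")
assert not can("clerk", "manage_users")
```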

These principles have a corollary consequence – the IT department is typically the organizational unit that controls privileges for new staff who join the organization.  The process to request this is relatively straightforward: the hiring supervisor completes an online form that notifies the IT department that a new user account needs to be created.  The actual technical process to establish the account is relatively lengthy due to the ever-increasing number of systems and applications that require a password.  Not surprisingly, our user community is unhappy when a new user account doesn't work "out of the box."  This problem culminated in a meeting between some of the unhappy users and me, the purpose of which I think was as much to remind me of where my bread was buttered as it was to seek a better way to activate new accounts.

Before a process can be improved, one must understand the steps involved in it.  Process improvement also requires collecting data on the frequency of the problem so that improvements can be measured as the process changes.  But in this case, the real problem was a more general frustration with the technology and a sense that the technology department had the wrong priorities, or at least priorities at variance with what this group of users thought they should be.

So what do you do?  For one thing, having enough notice of a new user account helps ensure that the account is created on time.  Having time to set up the account also allows IT to test the account and make sure it works before turning it over to the user.  As we discovered, having a written checklist of the process also helps cut down errors (especially if the administrator is interrupted while activating the account, which surely never happens elsewhere).  There are also technology solutions to managing accounts across multiple information systems (for example, single sign-on technology that stores the account information of the other systems within the SSO system).  These solutions typically cache subordinate system passwords and pass them to those systems on demand, so that the user need only remember the primary account password (such as their Active Directory login).
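As a sketch of the written-checklist idea: the activation steps below are hypothetical stand-ins for whatever systems a new hire actually needs, but encoding the checklist in this form makes "what's left to do?" unambiguous even after an interruption.

```python
# A sketch of the account-activation checklist as code.  The step
# names and systems are hypothetical examples.

ACTIVATION_STEPS = [
    "create Active Directory account",
    "create email mailbox",
    "grant medical record system login",
    "enroll account in the SSO password cache",
    "log in as the user and verify every system works",
]

def remaining_steps(completed: set) -> list:
    """Return the activation steps not yet done, in order."""
    return [step for step in ACTIVATION_STEPS if step not in completed]

# After an interruption, the administrator picks up exactly where
# the checklist says to:
print(remaining_steps({"create Active Directory account"}))
```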

We also implemented a feedback process so that a new user (or their supervisor) could report problems with the account to the IT department.  This information can be used for training or for process improvement, particularly where trends are evident in the errors over time.  The limitation of this process was that the number of errors reported was relatively small, and the fact is that no process will ever have a zero error rate, no matter how much attention you give it.  However, if you activated thousands of accounts each year, the data collected would be more useful to you.

All of these tools only work when there is a good relationship between the users requesting accounts and the IT staff that create them.  And for IT managers, this may be the underlying issue that causes the actual tension in the room.

One way to improve user relations is to talk with users regularly to understand their issues and get feedback on the IT department.  This goes beyond an annual user survey and requires an IT manager's attendance at meetings with users.  In addition, having avenues to communicate with the user community when there are system issues is important.  Finally, advertising the IT department's efforts to improve the processes that draw the most complaints can help improve how users feel about the department's services and staff.  Whenever you can, take a complaint as an opportunity to improve relations with your customers and advertise your success at resolving it.

Ant CyberSecurity

Ants in one's kitchen are pests (and difficult ones to evict once they have found something good to eat), but ants may have a more constructive future annoying cyberthreats in digital form.  Haack, Fink, et al. have written a paper on using ant techniques for monitoring and responding to technical security threats on computer networks.  As they point out, computer networks continue to become more complex and challenging to secure.  System administrators flinch at the thought of adding a new printer, fax machine, or other device because of the increase in monitoring and administrative tasks.  This problem is getting worse as more devices gain independent IP addresses on networks, like copiers and refrigerators.

Securing these devices poses a monumental task for IT staff.  Haack, Fink et al. have proposed an alternate security method based on the behavior of ants in colonies.  In their paper, Mixed-Initiative Cyber Security: Putting humans in the right loop, the authors describe at a high level how semi-autonomous bits of code might work in concert to respond appropriately to threats, minimizing the amount of human intervention required to address an issue.  From a system administrator's perspective, there are a lot of balls to juggle to keep a network secure.  Most relatively complex IP devices generate logs that require regular review.  In addition, some devices and software will send email or text alerts based on certain conditions.  Windows-based computers have a suite of patching and anti-virus/anti-spyware software that requires monitoring and review.  Internal network devices above a minimum complexity will log activity and output alerts.  Border control devices (firewalls) can be very noisy as they attempt to repel attacks and unwanted network traffic from outside the secure network.  Printers create logs and alerts.  And the list goes on and on as you begin to examine particular software systems (such as database and mail servers).  A lot can go wrong.

Haack, Fink et al. propose a multi-tiered approach to tackling these security issues.  The lowest-level agents are "sensors" that correspond to ants in real life.  Sensors roam from device to device searching for problems and reporting them to other sensors and to sentinels.  For example, you could write a sensor whose only interest is finding network activity on a particular UDP or TCP port above a certain pre-defined threshold on a device.  The sensor would report back to its boss, a "sentinel," that computer A had an unusual amount of network activity on that port.
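A sensor of this kind might look something like the following sketch.  The paper describes the behavior rather than an implementation, so the threshold, the class names, and the reporting interface here are my own invention.

```python
# A hypothetical sensor: count connections seen on one TCP port and
# report to a sentinel only when a pre-defined threshold is exceeded.
# The Sentinel interface and threshold are illustrative, not from the paper.

from collections import Counter

class PortActivitySensor:
    def __init__(self, port: int, threshold: int):
        self.port = port
        self.threshold = threshold
        self.counts = Counter()  # host -> connection count

    def observe(self, host: str, dst_port: int) -> None:
        """Record one observed connection attempt."""
        if dst_port == self.port:
            self.counts[host] += 1

    def visit(self, sentinel) -> None:
        """Report any host whose activity on this port looks unusual."""
        for host, count in self.counts.items():
            if count > self.threshold:
                sentinel.report(self, host, self.port, count)
        self.counts.clear()
```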

Sentinels are in charge of security for a host or group of similarly configured hosts (for example, all Windows file servers, or all Windows XP Professional workstations in a domain).  Sentinels interact with sensors and are also charged with implementing organizational security policy as defined by the humans at the top of the control hierarchy.  For example, a policy might require that all Windows XP workstations have a particular TCP port closed.  Sentinels would be taught how to configure their hosts to close that inbound TCP port (for example, by executing a script that enables TCP filtering on the workstations' network adapters, or by configuring a local software firewall).
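Continuing the sketch, a sentinel might hold the port-closure policy for its group of hosts and act on sensor reports.  The enforcement step is a placeholder comment, since in practice it would run the kind of configuration script just described; none of this is prescribed by the paper.

```python
# A hypothetical sentinel for a group of similarly configured hosts.
# The policy format and enforcement hook are illustrative.

class Sentinel:
    def __init__(self, hosts: list, blocked_ports: set):
        self.hosts = hosts
        self.blocked_ports = blocked_ports  # policy set by the humans up top

    def report(self, sensor, host: str, port: int, count: int) -> None:
        """Receive a visiting sensor's finding and enforce policy if needed."""
        print(f"{type(sensor).__name__} saw {count} hits on {host}:{port}")
        if port in self.blocked_ports and host in self.hosts:
            self.enforce(host, port)

    def enforce(self, host: str, port: int) -> None:
        # Placeholder: a real sentinel would push a firewall rule here,
        # e.g. by running a configuration script against the host.
        print(f"closing inbound TCP {port} on {host}")
```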

Sentinels learn about problems from sensors that come to visit them.  Sentinels can also reward sensors that provide useful information, which in turn encourages more sensors to visit the sentinel (much as foraging ants lay down trail information that can be read by other ants).  Sensors are designed to like being patted on the head by the sentinel, so enough pats lead more sensors to stop by for theirs.  Of course, if a sensor has nothing interesting to report, no pat on the head.  Sensors that rarely have useful information get taken out of service or self-terminate, while rewarded sensors are used by the sentinels to create more copies.  So, if computer problems are like sugar, the ants (sensors) that are best at finding the sugar are reproduced.
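The reward economy can be modeled crudely like this; the pat thresholds are arbitrary numbers of my choosing, just to show the select-and-reproduce loop.

```python
# A toy model of the reward economy: sensors that never earn a pat are
# retired, and well-rewarded sensors are copied.  Thresholds are
# arbitrary illustrations, not values from the paper.

def cull_and_breed(sensors: list) -> list:
    """sensors is a list like [{'looks_for': 'port 445 scan', 'pats': 7}, ...]"""
    survivors = [s for s in sensors if s["pats"] > 0]            # no pats: retire
    offspring = [dict(s, pats=0) for s in survivors if s["pats"] >= 5]
    return survivors + offspring                                 # sugar-finders multiply

generation = [{"looks_for": "port 445 scan", "pats": 7},
              {"looks_for": "already-patched exploit", "pats": 0}]
print(cull_and_breed(generation))
```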

In a computer network, if a known hole is identified in the configuration of a Windows workstation, sensors designed to find that hole will be rewarded at the expense of those looking for an older problem that has already been patched.  The security response to new and evolving problems should therefore modify itself over time as new problems are identified and passed down the hierarchy to the sensors.

Haack, Fink et al. also discuss the roles of sergeant and supervisor (apparently appreciating the alliterative value of having all the roles in the paper start with the letter "S" – who says that computer scientists don't have a sense of humor?).  The sergeant coordinates the sentinels for an organization and presents information graphically to the supervisors (the human beings who manage the security system).  The sergeant is the implementer of organizational policies set by the supervisors (all workstations will have a firewall enabled; the most recent anti-virus definitions will be applied within 1 day of release by the vendor).

From the paper, I presumed that the sentinels actually carry out changes to host devices when they realize, based on information from sensors, that a host is not aligned with an organizational policy.  However, this is not discussed in detail.  The authors suggest it in section 3.3 with the reference to sentinels being responsible for gathering information from sensors and "devis[ing] potential solutions" to identified problems.  My guess is that tool kits would be written ahead of time by the system developer for implementing solutions to identified problems (for example, how to download and apply a recent anti-virus definition file for a particular desktop operating system).

The authors also envision that the sergeant might be granted authority to acquire external resources automatically without seeking prior approval from its human supervisors, at least up to certain maximum expenditures.  For example, if the subscription for anti-virus definitions had expired, a supervisor might grant the sergeant the authority to renew it so long as doing so cost less than $x.  A growing number of software makers have designed subscription services for software updates, many of which cost a set amount per month or year.  Most organizations that use these services budget to pay for the service each year, so automatically authorizing such expenses might make sense.  It would also avoid the lapses in security coverage that occur today in plenty of organizations that lack adequate controls over renewals.

The authors discuss the issue of cross-organizational security in section 1 of their paper, indicating that "coordinated cyber defensive action that spans organizational boundaries is difficult" for both legal and practical reasons.  However, the proposed security response could be improved if there were a way to securely share information with other organizations operating an "ant" cybersecurity system.  Sharing active or popular threats, or tool kits for quickly responding to specific threats, might improve an organization's overall security response and increase the larger value of the proposed system.

Information security continues to be a significant issue for most organizations, whose information systems grow ever more complex.  Establishing and implementing policies represents a significant time investment, which many organizations cannot afford to make.  The more that security mechanisms can be automated to reduce risk, the greater the value to organizations, especially where qualified information security experts are unavailable or too expensive for an organization's security plan.  I'm interested to see where this concept goes in the future, and whether criminals will begin to design security threats to infect the security system itself (as they have in the past with Symantec's anti-virus software).

Chapter 2: Stop Screwing With Me

Writes an angry user one Sunday morning at 7:48 a.m.:

"I don't know who's doing the back up this morning, but whoever it was cut me off in the middle of my writing a complicated and lengthy assessment on a patient that is now lost.  I know you can tell when we're using the [database], so why did this happen?"

The organization employs a medical record system that runs on Oracle 10.  The front-end application is a Visual Basic application that uses ODBC to connect clients to the back-end database.  Our normal business hours are Monday through Friday, 8:30 a.m. to 9:00 p.m.; however, users do have remote access to our systems outside of normal business hours.  We therefore implemented a server maintenance schedule on Sunday mornings, knowing that some staff would still be inconvenienced by this decision, but at least most of the time the database would be available during normal business hours.

In theory, one could ask Oracle "who's logged in right now" and it would tell you as much as it knows (which may or may not be the whole story because of certain design aspects of the database and the application).  Of course, the basic problem is that if we asked the database this question, most of the time at least some user would be logged in because of remote access.  Consequently, we made a decision to perform cold backups of the Oracle database on Sunday mornings, which brings the database down for about 3 hours each week.  Upgrades, patches, and other server changes may make the database unavailable for longer periods.  We did provide notice to our user community of our maintenance schedule.
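For the curious, asking that question is essentially one query against Oracle's V$SESSION view.  Here is a minimal sketch using the cx_Oracle library; the credentials and connect string are placeholders, the connecting account needs privileges to read V$SESSION, and, as noted above, the answer may not be the whole story.

```python
# A minimal sketch of asking Oracle "who's logged in right now".
# Credentials and connect string below are placeholders.

import cx_Oracle

conn = cx_Oracle.connect("system", "password", "dbhost/ORCL")
cursor = conn.cursor()
cursor.execute(
    "SELECT username, osuser, machine FROM v$session "
    "WHERE username IS NOT NULL"
)
for username, osuser, machine in cursor:
    print(f"{username} connected from {machine} (OS user {osuser})")
conn.close()
```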

The user, however, raises two points.  First, can't an IT department find an alternate way to back up the database that would not cause an outage of the server?  And second, why isn't the IT department omniscient enough to know if a user has been bad or good?

To the first point, a cold backup is a reliable method of backing up an Oracle database, but it is neither the only nor the most sophisticated method.  Oracle supports a number of methods besides cold backups, including hot backups and RMAN-based backups.  We use cold backups because they are the simplest and most certain way to ensure the database can be recovered in the event of a system problem.  Our medical record system is the only database we support that uses Oracle for the database engine (we also support a version of Pervasive, two versions of Microsoft's SQL Server, MySQL, and various flavors of Microsoft Access), so we are not able to retain a full-time Oracle expert to administer our database.  A more sophisticated database administrator would be able to configure hot backups to run safely (which would not require the database to be down), or to configure RMAN, the backup manager integrated into the Oracle administrative tools, to perform backups.
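For illustration, an RMAN-based backup can be driven by a short script like the sketch below.  The RMAN commands shown are standard ones, but the paths and connection are placeholders, and a real deployment would need the database in ARCHIVELOG mode plus a tested restore procedure – exactly the expertise discussed here.

```python
# A sketch of driving an online RMAN backup from a script, instead of
# taking the database down for a cold backup.  Assumes the script runs
# on the database host as a SYSDBA-capable OS user.

import subprocess
import tempfile

RMAN_SCRIPT = """
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE NOPROMPT OBSOLETE;
"""

with tempfile.NamedTemporaryFile("w", suffix=".rcv", delete=False) as f:
    f.write(RMAN_SCRIPT)
    cmdfile = f.name

# "target /" connects as SYSDBA via OS authentication on the DB host.
result = subprocess.run(
    ["rman", "target", "/", "cmdfile=" + cmdfile],
    capture_output=True, text=True,
)
print(result.stdout)
```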

So, the technology is there, but the expertise is outside of our current capabilities.  Surprising?  Probably not.  Every database technology in the end performs a similar set of tasks – storing and retrieving data in an efficient manner.  However, how this simple idea is implemented varies widely across database engines and operating systems, and expertise has developed around each version.  The typical corporate IT department is unlikely to have this expertise in-house because the cost of the resource outweighs its utility to the organization relative to other priorities.  Smaller IT departments are generally made up of generalists who have broad but relatively shallow knowledge of the numerous systems and components that the IT department babysits for the company.

Expertise not maintained in-house must be contracted from an outside pool of IT experts.  However, there is no standard or certification to objectively evaluate external expertise (as there is for physicians and lawyers, both of whom must pass a state-sponsored licensing exam).  In addition, many IT departments elect to maintain control of critical systems via in-house staff, even if more expert staff are available to them.

In our case, by design, we elected to depend on the vendor of the health record system for Oracle database support.  Our approach was to call on this expert for dire emergencies.  The inconvenience to our users of Sunday morning backups seemed less than dire, hence we did not seek further advice from the support vendor on how to mitigate it.  That meant we would need to develop some Oracle expertise in-house to do the day-to-day maintenance on the database.  The extent of our knowledge was using cold backups to protect the database.

If the IT department were to hire a contractor who was an Oracle 10 expert to implement RMAN for backups and recovery, an internal member of the IT department would also need to be trained to operate RMAN, address errors, test the backups to see if they are recoverable, and modify the RMAN configuration as a result of changes to the database (for example, as a consequence of an upgrade to the application).  Over the longer term, the initial cost to configure RMAN is small compared to the ongoing maintenance cost of ensuring that RMAN continues to work properly post-implementation.  Additionally, the IT department itself would need to cope with staff turnover – what happens to the knowledge about RMAN when the trained internal resource leaves the organization, or is promoted?

This problem is not really avoided if the department elects to contract with the Oracle consultant for ongoing support, in the sense that in the long term, the consultant may stop providing the service, may become unavailable, or may want to be paid considerably more for his expertise than was originally bargained for.  So, either way, the total cost over the long run has to be balanced against the relative importance of implementing the service, in relation to the longer list of competing priorities for the IT department.  Given the basic kind of economic decisions made by small IT departments, inconveniencing a few users on Sunday mornings will almost always cost less than the relative expense and difficulty of a more sophisticated system.

As to the second point, users often presume that IT staff watch their every move like a bunch of voyeurs at the digital keyhole.  As technology has developed, so have the tools for monitoring user activity.  But the truth of the matter is that we typically do not have enough time to review this activity unless there is a problem or issue.  And in the example above, while we may have been able to detect that the user was logged in, there was no way to know if the user was reading the news on Yahoo! or typing the thirteenth page of his graduate thesis.

Could we do better?  Of course.  A larger budget would ease the trade-offs that IT departments make because of scarce resources.  As to the problem of kicking users out – we made a point of doing our best to post notice of unanticipated outages during business hours, but there is a limit to how effective notice of regularly scheduled outages will be for the hard-headed who insist on working on complicated matters in the middle of our backup schedule.  And you just can't make everyone happy.

iPhone Security and Corporate Networks

Some brouhaha has been brewing over how the iPhone addresses encryption with Microsoft Exchange.  (See the article here on InfoWorld)  According to InfoWorld, iPhones prior to version 3.1 of the OS did not accurately report whether they supported local encryption of data stored on the device.  For some corporate networks, encryption is mandated for organization-controlled devices that connect to their Exchange servers.  Apparently, prior to OS 3.1, the iPhone would report that it was encrypted, whether it was or not, as a way to ensure that it would connect to the Exchange server.

This fact apparently has some IT and compliance staff in a tizzy, because they may have introduced a number of these devices over BlackBerrys on the basis that the iPhone would comply with a local encryption policy or organizational requirement.  For example, the Health Insurance Portability and Accountability Act (HIPAA) security regulations, in the technical standards, do address the need for encryption of protected health information (PHI) transmitted over networks.  For some organizations, in order to simplify regulatory compliance, establishing a universal mandate that there be encryption between devices outside the corporate LAN and sensitive servers inside it may be the most sensible approach.  Of course, if we are talking about email, email received and read is generally not encrypted to begin with, whether it is sensitive or not.  That's because most email users find it too complicated to digitally sign an email with their own personal certificate and ensure that the receiving party has a way to decrypt the message with the typical certificate-exchange approach to email encryption.

Microsoft Exchange does allow for the transfer of other information (like calendars and tasks), but I seriously doubt many health organizations use the Microsoft calendar to manage patient appointments or put PHI into either of these data types.  Most of the PHI action in health care facilities is within their charting and practice management systems, and neither usually integrates with or is based on Microsoft Exchange.  So, establishing a blanket policy requiring that organization-controlled remote devices be encrypted to connect to corporate resources can be a reasonable approach, but the reality is that HIPAA doesn't automatically mandate that for iPhones.

There should be a documented risk assessment for iPhones that connect to the corporate network, which would weigh the risk of loss of PHI against the cost of mitigating it by encryption (and perhaps other mechanisms like remote wipe).  Encryption should be used if there is a substantial risk of PHI being lost when an iPhone is stolen.  But to establish that, the risk analysis would need to evaluate how often these devices are lost per total phones per year, and how many of the lost phones actually had PHI on them.  My guess is that the likelihood would generally be small for most organizations.  The issue, then, is how to make your compliance plan flexible but also enforceable and effective at protecting your PHI.  And that, my friends, is the art of information security!
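The arithmetic behind such a risk assessment is simple enough to sketch; every number below is hypothetical and would need to come from an organization's own fleet size and incident history.

```python
# A toy version of the risk arithmetic, with entirely hypothetical
# numbers: how many PHI-bearing iPhone losses to expect per year?

phones_in_fleet   = 200
loss_rate         = 0.03   # phones lost or stolen, per phone per year
fraction_with_phi = 0.10   # of lost phones, how many actually held PHI

expected_phi_losses = phones_in_fleet * loss_rate * fraction_with_phi
print(f"Expected PHI-bearing losses per year: {expected_phi_losses:.1f}")
# -> 0.6 per year with these inputs; whether that justifies mandatory
#    encryption is the judgment call the risk assessment must document.
```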

Linden Labs and Virtual Sex Toys? Huh?

Oh, yeah, there is a lot of kinky virtual sex going on in Second Life.  And to support all of that activity, there are apparently a lot of vendors selling knock-offs of the “real” virtual sex toys of one vendor who is mad enough to sue.  (See Wired Article)

Yes, a few years ago, Linden Labs set up a special "mature" designation for areas in its virtual world that were aimed at "adult" conduct, so that those under 18 and others with sensitive eyes would not be offended by what they found.  However, probably much like the real world, virtual sex is rampant in Second Life.  As a consequence, there is a heavy trade in sex-related objects.  According to the plaintiff, Eros Products LLC, its SexGen product line has sold about $1 million (that's U.S. dollars) within Second Life over the past five years.  (A copy of the Complaint is here)

Vicarious and contributory liability for copyright infringement are recognized by the courts as causes of action under federal law.  This kind of liability has been raised in recent years against the various music file sharing services that came and went, such as Napster (originally a file sharing service without any copyright licensing from the music companies that owned the music being shared), Grokster, and LimeWire.  Several of these services were held liable for the file sharing of their users, in part based on the notion of vicarious liability.  Cases prior to Napster et al. addressed this kind of liability along two lines: landlord-tenant cases where the landlord exercised no control over the leased premises, and dance-hall cases where the operator of the hall controlled the premises and obtained a direct financial benefit from the infringing performances.  Fonovisa, Inc. v. Cherry Auction, Inc., 76 F.3d 259 (9th Cir. 1996).  Under common law, landlords have not been held to have copyright liability, whereas dance-hall operators have.

In Fonovisa, the defendant operated a "swap meet" where the operator rented stalls to individuals selling bootlegged copies of music owned by the plaintiff.  For the swap meet operator to be liable, the plaintiff had to prove that the operator controlled the marketplace and obtained a direct financial benefit from the sales of infringing works.  The Court sided with the plaintiff, even though Cherry Auction did not receive a commission from the sales of the infringing materials.

Assuming that Eros Products LLC (and other plaintiffs that may join the suit should the court certify this as a class action) can prove that it is the valid owner of the copyrighted works, the question for the court is whether Linden Labs meets the standard for contributory liability.  Linden Labs is a virtual landlord in the sense that users of Second Life pay an annual subscription in order to "own" virtual real estate within the virtual world.  The right to own this virtual property is conditioned on payment of the subscription.  Note, however, that there are plenty of users who do not acquire any virtual real estate in Second Life – and for them, there is no fee to participate.

However, Linden Labs also charges fees for the conversion of Linden Dollars into U.S. Dollars through the Linden Exchange.  For infringers seeking to sell pirated works in the virtual world, the real benefit is the ability to take the proceeds of those sales and convert them back into hard currency for use in the real world.  Approximately 250 Linden Dollars are worth one U.S. Dollar (trading in this currency fluctuates).  To convert Linden Dollars back to U.S. Dollars, Linden Labs charges a fee of 3.5% of the value of the transaction.  So, indirectly, Linden Labs benefits from the sale of infringing goods every time an infringer converts his Linden Dollar proceeds to hard currency.
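To put numbers on that, using the figures above (the sale amount is hypothetical, and I am simplifying exactly where in the transaction the fee is assessed):

```python
# Linden Labs' cut on cashing out, using the rough figures above:
# about L$250 per US$1 and a 3.5% exchange fee.

linden_proceeds = 10_000            # L$ earned selling virtual goods
usd_gross = linden_proceeds / 250   # US$40.00 before the fee
fee = usd_gross * 0.035             # Linden Labs' 3.5% exchange fee
print(f"gross ${usd_gross:.2f}, Linden Labs keeps ${fee:.2f}, "
      f"seller nets ${usd_gross - fee:.2f}")
# gross $40.00, Linden Labs keeps $1.40, seller nets $38.60
```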

There is a question, however, of whether Linden Labs is merely a landlord who relinquished control to an infringing tenant.  Eros Products LLC claims that Linden Labs did exercise control over the activities of its users because all of the virtual worlds within Second Life are ultimately housed on servers controlled by Linden Labs.  Pl.'s Complaint at ¶¶ 127-128.  Furthermore, Linden Labs has ultimate control over the software that operates Second Life, and I suppose Linden Labs could alter its software to prevent copyright infringement if it wished to do so (how, exactly, is another story).  Factually, however, I think this is going to be tough to prove.  Unlike Grokster, which marketed itself as the successor to Napster for those looking to willfully infringe the copyrights of others, Linden Labs has not marketed itself as a safe haven for willful copyright infringers.  On the contrary, Linden Labs gave some thought to copyright in its license agreement, granting its users rights in the works they create in-world.  (See Terms of Service here at section 3.2)

The other question is whether Linden Labs, in light of the DMCA, fits within the safe harbor established for internet service providers, shielding it from liability for the infringing acts of its users.  More on that in another post.  Stay tuned!

IT Changing the Law Profession?

There are an incredible number of lawyers in the United States today – estimated at more than 1 million and growing, based on the number of students enrolling in law schools across the country. Technology is changing how we do law.

The number of lawyers providing legal services has pushed lawyers to become specialists so that individual attorneys can differentiate themselves within the legal services market.  The ABA, which claims more than 400,000 members, is probably the largest association of attorneys in the U.S.  It has about 35 different sections of law that an attorney can join, with several subsections and numerous committees within each section – all representing a specialty area of knowledge.

As a result of its size, the ABA has a global reach and an impact on legislation at the state and federal level.  The ABA also presents opportunities for attorneys to meet and work with other attorneys, and to learn about a legal issue or area from an expert in the field.  These learning and working opportunities have been expanded by technology.  For example, the ABA's web site contains a substantial amount of knowledge and information for attorneys.  ABA committees regularly meet by phone to plan events and activities.  In fact, last year I attended a conference in Second Life on the use of virtual worlds by attorneys and professionals.

Legal research and practice management have also been changed by the advent of Westlaw and Lexis, giving attorneys access to an unprecedented amount of information (if you can afford the costs of searching their databases) without the need to travel to a library or to purchase a private one.  And Google, blogs, Twitter, YouTube, Facebook, and LinkedIn have all added another layer of interaction and knowledge sharing.  The question for attorneys is whether there is more to come.

Specialization inherently requires specialized knowledge that is generally unique or rare in the market, allowing the owner of that knowledge to exploit it.  An attorney's general knowledge of the legal system is not very rare – most attorneys have the same or similar working knowledge of the courts, research tools, and document formatting as a result of the standardization of law school curricula across the country.  While there is some differentiation between attorneys in terms of skill in these basic knowledge areas, the difference is not enough to substantially affect the market for legal services.

However, there are relatively few experts in equestrian law, for example.  Within the MSBA solo listserv, I can think of only one attorney in Maryland who specializes in this area.  So, if you have an issue with a horse, that's the attorney to call.  There are relatively few attorneys who specialize in non-profits.  And there is a long list of specialists in other areas.  Statutes sometimes create specialists.  For example, when the Copyright Act was revised in 1976, there were initially few experts on the revised Act.  As time has progressed, more attorneys have entered the field of copyright law, so that now there are a fair number of attorneys who can help you register a copyrighted work or litigate an infringement claim.

This move towards specialization also tends to lead to competition, which pushes down the value of legal services in a particular area as more entrants arrive in the specialty.  So, the value of registering a copyright at the Copyright Office, beyond the fee charged by the Office for the registration, is not very high.  Plenty of attorneys can help a client fill out the form, attach the appropriate number of copies of the work, and mail it to the Copyright Office.

This trend led one author, Richard Susskind, to write The End of Lawyers? Rethinking the Nature of Legal Services.  The book discusses the tendency for legal work to move over time, as a result of developments in information technology, from "bespoke" – a highly specialized experience for each client (like trial litigation) – to commodity (such as tax compliance software developed by Anderson & Cooper and now marketed by Deloitte & Touche).  Richard Granat, a Maryland attorney who works from Florida, has developed "Direct Law," an automated document assembly system for attorneys that is available on a subscription basis.  Such a system captures specialized knowledge in order to automate legal service delivery: a specialist attorney defines the business rules, and those rules are written into the application that assembles the documents.
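I do not know how Direct Law is implemented internally, but the general pattern of rule-driven document assembly can be sketched in a few lines; the lease template and the late-fee rule below are invented for illustration.

```python
# A sketch of rule-driven document assembly: a specialist encodes a
# business rule once, and the system applies it to each client's
# answers.  Template and rule are hypothetical examples.

from string import Template

LEASE_TEMPLATE = Template(
    "This lease between $landlord and $tenant runs for $term_months months."
    "$late_fee_clause"
)

def late_fee_rule(answers: dict) -> str:
    """Business rule defined by the specialist attorney: include a
    late-fee clause only when the client asked for one."""
    if answers.get("wants_late_fee"):
        return f" Late payments incur a fee of ${answers['late_fee']}."
    return ""

def assemble(answers: dict) -> str:
    fields = dict(answers, late_fee_clause=late_fee_rule(answers))
    return LEASE_TEMPLATE.substitute(fields)

print(assemble({"landlord": "A. Smith", "tenant": "B. Jones",
                "term_months": 12, "wants_late_fee": True, "late_fee": 50}))
```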

There will be more of this sort of thing over time, which will expand the supply of attorneys that can deliver more specialized legal services.  Increasing supply inevitably leads to a lowering of the cost per transaction to clients, while still maintaining a minimum level of quality as the information system consistently enforces the applicable business rules.  In turn, this pushes more attorneys to find a new specialized area, creating another market for automation.  Maybe Ray Kurzweil is right – in the future, we will all be small business owners that employ automated systems to generate revenue.  I suspect that some attorneys fear this future will make them unemployed.  What do you think?

Banished? Who Knew Facebook Was Sovereign?

Facebook has threatened to “banish” users that buy friends on the popular social media web site, according to Yahoo.  The Australian company offering to sell you friends or fans, uSocial, is actually offering for sale what you otherwise just have to buy from Facebook directly through advertising.  Facebook has claimed that the method that uSocial is using, logging into other user accounts to make them a fan or friend of the business that paid it, violates Facebook’s terms of service.  But, my bet is more that Facebook just doesn’t appreciate the competition with its own ad service.  Besides, who made Facebook king anyway?  Oh yeah – its 200 million users.  Uh oh.

Snow Leopard (OS X 10.6)

Apple released Snow Leopard this past Friday, August 28, to the general public.  Version 10.6 of the operating system has been billed as primarily a "behind the scenes" improvement to OS X, building on the technology that runs Apple's computers and smartphones today.  (See Wired Article)  I decided to go ahead and install 10.6 on my first-generation Intel MacBook and also on my MacBook Air.

So far, so good.  I use Parallels to run a virtual Windows XP machine.  Parallels 3 is not compatible with OS X 10.6, so I upgraded to version 4 of Parallels, and my virtual machine works again.  I did run into some problems getting the upgrade to run (my VM was left in a paused state before the upgrade, and I could not open it in Snow Leopard to stop it; there is a workaround for this on Parallels' web site if you search for it in help).  And, sadly, my venerable Lexmark Z715 lacks drivers in 10.6, and none are available (or planned by Lexmark).  But my Z1420 works just fine, so I can still print to my heart's content at the office.  Perhaps I will finally break down and get a new printer for the home office.

Other than the few incompatible items listed above, the OS X install itself was rather smooth and saved me between 7 and 10 gigabytes of hard drive space.  In addition, a number of the processes that ship with 10.6 are now 64-bit, and they run considerably quicker than their predecessors.  64-bit programs can work on data in chunks twice as wide as older 32-bit software and, on Intel processors, have access to additional registers.  Safari opens more quickly and is more responsive than pre-upgrade, and other software like Mail, iPhoto, and iTunes is much more efficient, as is Finder.

I am still using Microsoft Word 2004, and unfortunately, Word does not get noticeably quicker under Snow Leopard.  This version of Office runs as a PowerPC application under emulation rather than natively on Intel, but then it was slow in 10.5 as well.  The Office suite has always run faster on Windows.  Personally, I think Microsoft is just trying to tell us all that we should use their products on Windows.  Perhaps the next major release of this package will be an improvement.

Others have written about the new features and what went where in 10.6.  (See Wired; Leopard Tricks Tips and Tools)   Overall, I think 10.6 is a nice upgrade and worth the $30 for a license.  Compared to other, more disastrous upgrades from our friends at Microsoft, most will not have an issue going from 10.5 to 10.6.  Good luck.

Update September 9

I am still running Snow Leopard on my MacBook and MacBook Air.  I have run into compatibility issues with QuickBooks for Mac 2009 – the program mostly works, except that it crashes when I attempt to record deposits.  I understand from Intuit's web site that they are working on a patch for compatibility issues with their product.

I have also noticed that periodically Mail gets upset and downloads duplicate copies of messages in my Gmail account.  Closing and re-opening the application seems to solve the issue, though I have no idea why.

Because my Lexmark Z715 no longer works with OS X 10.6, I tried my other handy printer – an HP J3680 all-in-one.  The HP web site claims that drivers for the J3600 series were included with the 10.6 upgrade, but when you check this claim against the Apple support web site, the J3600 series is notably missing from the compatibility list.  So when I try to add this printer to my MBA, OS X tries to connect to Apple to get a mysterious update that will provide the driver.  Needless to say, no update is forthcoming.  Perhaps HP will fix its driver for this printer series, which would save me the trouble of buying a new printer for home.

In spite of these issues, I still think the upgrade was a reasonable one.  Compared to other upgrades, the inconvenience has been generally small, and besides, the problems seem to be tied to some of the big vendors for software and hardware that ought to be more on the ball.  Isn’t that what the Microsoft people always say when stuff stops working after an upgrade?

Update September 15

Intuit released and re-released a patch that has addressed the issues I had with QuickBooks 2009 on OS X 10.6, so all is now well with that program.

In addition, I noticed last night that I was able to connect to the HP J3680 at home, though I am not able to use this printer through my older AirPort Express.  It works just fine, however, shared through my other MacBook, and I am also now able to scan PDF files to the MacBook from this printer.

The upgrade has gone relatively smoothly, all things considered, and now that I can print again at home, I am gearing up to destroy a forest!

Health Policy & U.S. Healthcare

I usually do not write about health policy in the U.S. because it is somewhat outside my area of expertise, but I have been thinking about the issues with health care reform this year and thought I would provide some analysis.  Watching the news, there seems to be a lot of resistance to reform.  The cost is one of the big stumbling blocks, given the price tag to the country floated by the various agencies charged with analyzing such things.  However, if you think about it, our current, unreformed system of health care already results in the insured paying for more than their own care.

First, let's talk about the uninsured.  There are approximately 50 million Americans who lack health coverage today in the U.S.  This does not mean that these people do not get any health care.  To the contrary, well over 10 million Americans get health care from Federally Qualified Health Centers (FQHCs), a significant proportion of whom are people without health insurance.  In addition, a substantial number of other health care entities provide care to the uninsured at low or no cost, but are not yet federally qualified to do so.  FQHCs are funded by the federal government today, at a cost of about $2 billion – we taxpayers are already subsidizing this care.  Other entities that provide free or subsidized care do so through private grants, which some of us subsidize today through charitable donations, the United Way, and so forth.

Second, some have been claiming that health reform in the U.S. will just lead to a lot of people waiting around to get health care.  At least in Maryland, by law, emergency rooms are required to treat whoever shows up, whether the patient is having a cardiac arrest or just has the flu.  (See Md. Health-General Code Ann. 19-3a-02(b)(2)(vi) for freestanding emergency centers)  Because of this, a fair number of patients who present to the ER are uninsured.  As an aside, economically speaking, emergency rooms tend to be loss leaders for the inpatient facilities to which they are attached.  What this means is that the ER's costs are not fully borne by the ER revenue stream from patients and insurers; much of the cost is actually covered by the patients the ER can admit to the main hospital after initial workup and treatment by the ER physician.  That also means, however, that uninsured patients who present to the ER with a non-emergency condition pass costs along to the main hospital, costs which must be covered by inpatient operations – and by extension by those of us who are insured and go to that hospital.

For example, an uninsured patient who presents at the ER with the flu is treated and sent home.  He may pay little or nothing for the visit, but the visit actually costs $800.  The hospital covers this cost by charging a bit more for every patient who is actually admitted, on a per-day basis (or in other costs that are charged in units).  Some admitted patients won't be able to pay either, so those who can end up paying a bit more to cover uninsured admitted patients and uninsured ER-only patients.  So if you have private insurance today, your rates are set in part on the actual costs of providing health care to uninsured patients who can't afford to pay, because the hospital has to pass the costs of treating those patients to someone who can pay the hospital.

If health reform meant that everyone would now go to their local ER, regardless of what the condition or illness was, this would be a bad idea.  Any time that I have been to an emergency room, there is a queue; the waiting room is always full no matter the time or the season.  However, if health reform actually could redirect patients that do not have emergency health issues to an alternate resource that they could actually afford and would see them, this would help improve an existing “wait problem” for care today, while simultaneously reducing the actual costs borne by hospitals for non-emergent ER visits (which should mean that we can pay less per visit to the inpatient section of the hospital).

And speaking of waiting around – the truth is that even insured people tend to wait for health resources under the current U.S. system of care delivery.  Many doctors have 3-6 week lead times for scheduling, their schedules are crowded with overbooks and double-books, and they often run late because of inefficiencies with the schedule and with administrative tasks; in short, competent physicians usually have too much demand for their services.  This causes queuing.  Health reform may not really be able to address this problem head on.  Part of it is a technology issue; a more sophisticated scheduling algorithm could probably be developed that would improve the scheduling of patients for care delivery, allow for overflow to other providers, and so on.  But part of the problem is insufficient health care delivery points on the map.  We apparently need more physicians to treat us.

Third, there are a fair number of U.S. personal bankruptcies each year (granting that the rates have been higher than normal this year because of the recession).  Lack of insurance combined with a large, unpaid medical bill is a primary cause of personal bankruptcies.  On the surface, if you haven't filed bankruptcy yourself, you might think this has no effect on you.  But bankruptcy is generally a bad thing.  For one, the person who files for protection with a $100,000 medical bill he has no hope of paying is injured because the cost of credit post-bankruptcy is considerably higher than pre-bankruptcy.  In addition, a bankruptcy will limit the person's job opportunities, will probably prevent him from gaining a security clearance for sensitive government jobs, and otherwise limits his economic productivity within the U.S. economy, all of which is bad for the economy and for all of us.

But, in addition, that bankrupt's medical bills are very unlikely to be paid through the bankruptcy proceeding.  The hospital is likely an unsecured creditor, at the end of the line with its hand out to the bankruptcy trustee.  That "bad debt" is a cost to the medical provider, who will pass it along to future patients who do pay for medical care, in the form of slightly higher unit costs.

The more bankruptcy filings, the more unsecured health care creditors that aren't made whole, and the higher the costs of care for everybody else.  So again, we taxpayers are by and large the same people who actually end up paying the medical bills of the bankrupt – either out of our own pockets when we go to the hospital, or through higher health insurance premiums that our employers pass on to us each year, or through our taxes (think Medicaid and Medicare, both of which pay for inpatient stays, both of which are paid for by taxes, and both of which are billed at increasing rates by hospitals, in part to cover the overall costs of providing care to patients without insurance who become bad debt for the hospital).

Fourth, there was this whole “death panels” claim about health reform, whereby the government was going to establish panels that would deny care to the elderly because they were too expensive.  Of course, this is silly.  If we were going to have such things, we would have a better name for them (maybe, “end of life decision making committee” or better yet “pull the plug on granny committee”, or something else that would be more catchy and might be more alliterative).  But, really, this is tied into the idea that the government, through health reform, would stop a person’s doctor from treating the patient appropriately, perhaps because of cost, or just because the government bureaucrat was a nasty person.  The whole matter is rather bizarre.

But it also doesn't speak to what goes on today.  For example, there are committees that determine the priority and qualifications of patients to receive replacement organs, because there is a long line and a short supply of organs.  Plenty of patients die each year for lack of a needed replacement organ.  (See this article for 2008 statistics on this issue)  I don't know that we call these committees "death panels," but they are a classic example of a pre-existing queue that results in health care rationing.

Health insurance companies also make decisions about what they will pay for and what they will not.  As far as I know, health insurers don’t decide to pull the plug on the elderly per se, but health plans do make choices for their insured patients about what services the patients will pay for out of pocket (or not receive at all if too expensive for the patient), what drugs are on and off the formulary (you may have to take the generic version of a pill, even if you’d rather take the brand name, for example), along with a host of other choices made ultimately to reduce overall costs to the plan.  In our market system, I suppose you can change plans if you choose to, but because your health plan is often tied to your employer, that would usually mean changing jobs – which is not a very practical way to change your health plan.

So, this whole "death panels" thing is really about trying to "ration" care so that more people get the basics, most likely at the expense of others who can pay for optional services.  I'm sure that isn't very palatable to patients with the means to pay for their thirteenth tummy tuck, but this is also, more or less, the status quo.  Health care is rationed today by insurers for most Americans.  If health reform led to a more rational way to provide basic health care to more Americans, it would be the right thing.

And here is the other side to this: let's say that we did institute a governmental body that would "ration" care.  Such an entity would be governed directly by the U.S. Constitution and federal law.  Do you really think that such a group would implement a "no life support for patients over 75" rule?  Really?

To summarize, the U.S. spends about 16% of GDP on health care, which is substantially more than most other places in the world.  Approximately $2.4 trillion (that is trillion with a "t") is spent each year – roughly the total economic output of Italy.  And ultimately, this money comes out of the pockets of those who can actually afford to pay for health care, which means that those who can't afford health care are being subsidized today by those who can.  The challenge for health reform is to do a better job at cost redistribution than our present system: by spreading costs over a much larger pool (such as all 300 million Americans, rather than the much smaller pools of employer-sponsored health plans today), by increasing the efficiency of health care delivery (through technology that saves time or increases accuracy and reduces risk of harm), and/or by encouraging a greater supply of health care providers to help meet the existing demand for services.

We’ll see what happens.  Stay tuned for developments.