Estate Planning in the Digital Age

One event remains certain for all of us: our inevitable end.  Planning for this eventuality is generally a good idea because it helps ensure that the people who survive you will be able to keep on keeping on.  This is why people have, for generations, written wills, powers of attorney, health care agent appointments, living wills or advance directives, and other legal documents.  All of these documents explain who is supposed to get what, and how your affairs should be closed out after your death.  The 21st century, however, has created a new set of problems with the rise of technology and the information age.  What happens to your online life when you die?  And how will your heirs access all of these things?

First off, computer security people have drilled into all of us that we should not share our passwords with others.  Besides having to change these passwords all of the time, users of most commercial information systems are used to having a password personal to them, which sometimes acts as a digital signature authorizing the vendor to do certain things (for example, to trade stocks, post information, or pay bills from a bank account).  Security experts have also drilled into us that we should not write down our passwords or stick them on post-it notes underneath our keyboards.  Furthermore, we have been taught to use different passwords for different services (so that, if one password is compromised, the damage is limited to one or a few systems).  As a result, most of us hold a lot of passwords to a substantial number of systems, and we usually don't tell anyone what they are.  So what happens when we die?

For myself, I am just thinking about the computer passwords that I use on a regular basis: (a) one for my laptop, (b) one each for online banking at several different banks, (c) a passcode for my iPhone, (d) a passcode for my iPad, (e) passwords for blogs that I maintain online, (f) passwords for my web server, and (g) passwords for online services I use such as Amazon, eBay, and iTunes.  I mean, I even had to create an account in order to update the software that programs my remote control for the TV at home!  I'm sure that if I sat down and thought about it, I could write an even longer list.  Without help, I doubt my wife or any of my relatives would be able to access much, if any, of this.  Moreover, if I simply wrote out the whole list, I would have to revise it every time I changed a password on the growing percentage of my accounts that require regular changes.

There do appear to be some subscription-based services available online today to help address this conundrum.  Dead Man's Switch is one such service.  Another is called Death Switch.  There may be others.  Obviously, you would want to give some thought to what you are providing to the service, and to the security the service employs, given that you may end up leaving sensitive information with it to forward to the people you have designated.  I have not used either of these services.  If you are a user, please feel free to post a comment about your experience to date.
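
For readers curious about the mechanics, here is a minimal sketch of the general "dead man's switch" idea: the owner checks in periodically, and if the check-ins stop for long enough, the service releases instructions to designated contacts.  This is only an illustration of the concept, not how either named service actually works; the file name and waiting period below are made up.

```python
import json
import time
from pathlib import Path

CHECKIN_FILE = Path("checkin.json")   # hypothetical state file
RELEASE_AFTER_DAYS = 60               # waiting period before releasing anything

def check_in() -> None:
    """Record that the account owner is still alive and active."""
    CHECKIN_FILE.write_text(json.dumps({"last_checkin": time.time()}))

def should_release() -> bool:
    """True once the owner has not checked in within the waiting period."""
    if not CHECKIN_FILE.exists():
        return False  # never armed; nothing to release
    last = json.loads(CHECKIN_FILE.read_text())["last_checkin"]
    return (time.time() - last) > RELEASE_AFTER_DAYS * 86400

if __name__ == "__main__":
    check_in()  # in a real service, this runs whenever the owner answers a reminder
    if should_release():
        # A real service would securely deliver instructions (ideally not raw
        # passwords) to the designated beneficiaries at this point.
        print("No check-in received in time; notifying designated contacts.")
    else:
        print("Recent check-in on file; nothing to do.")
```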

Stolen Personal Information

Hackers continue to steal data from companies the world over, Sony being a recent victim.  In that case, Sony apparently delayed reporting the loss to the 77 million users whose data, including dates of birth and possibly credit card numbers, was compromised.

In late March, Epsilon reported that hackers had stolen the names and email addresses of individuals who receive business newsletters from Epsilon's clients, which include a number of well-known companies such as Best Buy and Robert Half International.  Considering that Epsilon delivers over 40 billion emails a year for its clients, this breach raises the odds of better-targeted phishing attacks, particularly for customers of banks that have used Epsilon for email marketing.

There should be no surprise that the regulatory penalties for data breaches continue to escalate.  Security breach notification procedures for health care providers were codified in the 2009 ARRA legislation.  Section 13402 of the ARRA legislation (on page 17 of the linked PDF) puts the responsibility on a covered entity to notify its customers of a data breach where unauthorized access is gained to "unsecured" protected health information.  In layman's terms, "unsecured" PHI is data that is not encrypted.  So, for example, a typical relational database stores its data in physical files on a computer hard drive or array.  Some database systems encrypt these files so that you could not just open up a file in Notepad and read its contents.  If a hacker were to gain physical access to the server where these files were located, he or she might not be able to read them without further access (for example, an administrator-level username and password to directly query the database).  Notification to patients would not likely be required in this circumstance if you could show the hacker gained physical access but not database-level access.
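
To make the encrypted-at-rest distinction concrete, here is a minimal sketch of encrypting a data record before it is written to disk, so that someone who walks off with the file alone sees only ciphertext.  It uses the third-party Python cryptography package (pip install cryptography); the file name and record contents are hypothetical, and a real system would keep the key in a key-management system rather than generating it alongside the data.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# In practice the key would live in a key-management system, not beside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical record that would otherwise sit readable on disk.
plaintext = b"patient_id=12345,dob=1970-01-01,diagnosis=..."

# Encrypt before writing; a thief who copies only the file sees ciphertext.
with open("records.dat", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Reading it back requires the key, i.e., logical access, not just physical access.
with open("records.dat", "rb") as f:
    print(cipher.decrypt(f.read()))
```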

Does your database encrypt its stored data files?  Not all database software, and not all versions of specific database software, provide for native encryption.  For example, the data files of your Microsoft Access database are not likely to be encrypted.  For performance reasons, data files for MS SQL Server databases may also not be encrypted.  But, even if your database file is encrypted, if the administrator password to the database itself is blank or easy to guess (like “admin”), you may still have trouble brewing back at the server room.
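
On the weak-password point, even a very basic audit is better than nothing.  The sketch below simply checks account passwords against a short list of blank or guessable values; the account names and password list are hypothetical, and a real audit would pull accounts from the database's own system tables and go much further (lockouts, complexity rules, rotation).

```python
# A minimal sketch: flag administrator accounts whose passwords are blank or
# trivially guessable. The account names and password list are hypothetical.
WEAK_PASSWORDS = {"", "admin", "password", "sa", "123456"}

def audit_admin_password(account: str, password: str) -> None:
    if password.strip().lower() in WEAK_PASSWORDS:
        print(f"WARNING: account '{account}' uses a weak or blank password.")
    else:
        print(f"Account '{account}' passes this (very basic) check.")

audit_admin_password("sa", "")          # the classic blank sa password
audit_admin_password("admin", "admin")  # default credentials
```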

Here is a list published by HHS of data breaches reported to it under ARRA's notification requirements.  Do you see your physician on this list?  If current trends continue, you may see yours there sooner rather than later!

Disaster Recovery and the Japanese Tsunami

The art of disaster recovery is to plan for what may be the unthinkable while balancing mitigations that are both feasible and reasonable for your organization's resources and circumstances.  On March 11, Japan was struck by a massive earthquake and tsunami that caused enormous destruction, with total losses estimated at $310 billion.  One of the major failures over the last several weeks has been at the nuclear power complex in Fukushima, home to six nuclear power plants.  As of this writing, the disaster continues: at least two of the plants remain in a critical state after the failure of the complex's primary and backup power systems, which controlled the temperature of the nuclear fuel rods used to generate power.

As an unfortunate consequence, many people have been exposed to more radiation than normal, food grown in the area of the plant has shown higher levels of radioactive materials than normal, radioactive isotopes in higher-than-normal concentrations have been detected in the ocean near the plants, and numerous nuclear technicians have been exposed to significant radiation, resulting in injuries and hospitalizations.  As far as disasters go, the loss of life and resources has been severe.  And like other major environmental and natural disasters, the effects of the earthquake and tsunami will be felt for years by many people.

Natural disasters like this one cannot be prevented.  We lack the technology today to effectively predict or control these kinds of events.  And while larger-scale disasters are relatively rare, planners still need to assess the relative likelihood of such events and develop reasonable mitigation plans to help an entity recover should one occur.  Computerized health records offer an advantage here: the data housed by these systems can be cost-effectively backed up and retained at other secure locations, permitting system recovery and the ability to continue operations.  Paper records, by contrast, are far less likely to be recovered if a tsunami or similar natural disaster washes them away.
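
As a small illustration of that backup point, the sketch below copies a data file to a second location and verifies the copy against a checksum.  The file and directory names are hypothetical demo values; a real plan would also encrypt the copy and ship it to a geographically separate, secured facility on a regular schedule.

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def backup_with_verification(source: Path, offsite_dir: Path) -> None:
    """Copy a file to a second location and confirm the copy is bit-identical."""
    offsite_dir.mkdir(parents=True, exist_ok=True)
    destination = offsite_dir / source.name
    shutil.copy2(source, destination)
    if sha256(source) != sha256(destination):
        raise RuntimeError(f"Backup verification failed for {source}")
    print(f"Backed up and verified {source} -> {destination}")

# Hypothetical demo data standing in for a real export of system data.
Path("ehr_export.db").write_bytes(b"demo record data")
backup_with_verification(Path("ehr_export.db"), Path("offsite_backup"))
```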

Even the best recovery plan, however, will be severely tested should a major disaster be realized.  Japan was hardly unprepared for a major earthquake, yet it is still struggling to bring its nuclear facilities under control nearly three weeks later.  Still, having a plan and testing it regularly will increase the odds of recovery.  My thoughts are with the Japanese during these difficult times.

Great Firewall Maybe Not So Great

Australia has announced plans to implement mandatory content filtering by its internet service providers for certain kinds of web content, essentially attempting to block all Australian internet users from these categories of sites.  (See Yahoo article here)  China attempted something similar earlier in 2009, but placed its plans on hold.  Those plans apparently included requiring computer makers selling computers in China to install filtering software that would limit user access to certain "objectionable" web sites.

I suspect that one day, the older internet users among us will look back with nostalgia at the days when we could freely look at bestiality, hardcore violence, and sites on freeing Tibet from Chinese rule.  Sadly, today's debate in Australia seems to be framed as a conflict between ordinary, mainstream folk and the scum of the earth that produce child porn and other nastiness.  And why would anybody want child porn sites to be available to anyone?  The problem is always in how we define the things to be filtered.  In Australia's plan, the non-government entity responsible for the filtering would receive complaints from the public about a site, and then filter the site for everyone.  So, I could claim that yahoo.com is actually a child pornography site and file a complaint.  Hopefully, the entity that reviews these complaints would have a reasonable process to separate the wheat from the chaff, and not automatically add whatever site is complained of to the filter list.
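
In code form, the hoped-for review step is nothing more than a gate between the complaint queue and the blocklist.  The toy sketch below is only meant to illustrate that gate; the domains and the reviewer's decisions are entirely hypothetical.

```python
# A toy sketch of the hoped-for review step: complaints go into a queue, and a
# site is only filtered after a human reviewer confirms the complaint.
complaints = ["example-bad-site.test", "yahoo.com"]   # complaints from the public
confirmed_by_reviewer = {"example-bad-site.test"}     # outcome of manual review

blocklist = set()
for domain in complaints:
    if domain in confirmed_by_reviewer:
        blocklist.add(domain)
    else:
        print(f"Complaint about {domain} rejected after review; not filtered.")

print("Filtered domains:", blocklist)
```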

I would also hope there would be some process to be removed from the filter, with due process protections if you receive an adverse decision from the filtering entity.  But, as you can imagine, this only increases the overall cost of the filtering system, a cost passed on to Australian taxpayers, who could avoid such sites simply by not clicking on them in the first place, saving everyone a great deal of time and sparing civil liberties in the bargain.

I'm not saying, as a fatalist would, that this plan is doomed to failure from the start.  Enough talented people put into a room can come up with a workable and effective solution to this problem.  Rather, I think this internet filter concept is a solution looking for a problem, one that so far exists only in the minds of some who are easily offended by the internet.  In my opinion, public monies would be better spent stopping phishing attacks and similar malicious web sites, and enforcing the existing laws that criminalize the wholesale theft of identity and credit card information that occurs today on the internet.

Seeing Red

The Federal Trade Commission (FTC) promulgated regulations to help reduce consumer identity theft back in 2007, with enforcement for "creditors" and national banks originally slated to begin in 2008, then pushed to 2009, and now to November 1, 2009 for certain kinds of creditors.  (See the Red Flags Rule here)

Identity theft is a real problem for people of all sorts (approximately 10 million people fall victim to this kind of fraud at a loss of around $50 billion each year).  As a result, the FTC has interpreted the term “creditor” more broadly than the kinds of businesses we tend to think of, like credit card companies.  (See FTC FAQ)  According to the FTC, a creditor includes anyone that provides a service now and accepts payment later.  Lawyers routinely do that, as do health care providers, department stores with lay-away plans, and other service professionals (except maybe your mechanic who won’t give you your car until you pay for the service).  Because of the broad application of the rule by the FTC, the lawyers decided to sue the FTC to force an interpretation of the Red Flags Rule to exclude, you guessed it, lawyers.  (See the ABA Release here)

As a practical matter, federal courts generally defer to an administrative agency's interpretation of the statutes it administers under the Chevron doctrine.  Every so often, courts will overturn an agency's interpretation, but the odds are low.  (See Massachusetts v. E.P.A., 549 U.S. 497 (2007)).  The ABA's odds of getting a decision in its favor are probably no better than average, and in any case a win won't help other kinds of professionals who accept payment from customers over time.  And for lawyers, as no decision is expected before the latest compliance deadline of November 1, 2009, we all find ourselves in the same boat of needing to comply with the Rule.

Section 681.2 requires that covered organizations (a) periodically identify accounts that may be covered accounts under the rules, (b) develop a program for those accounts that "is designed to detect, prevent, and mitigate identity theft," and (c) administer the program by seeking board approval of the policy, training staff, and monitoring the program over time to ensure that it is overseen properly.  16 C.F.R. § 681.2(c)-(e).  The program must be in writing, and must be reasonable in relation to the size of the organization implementing it.

The Appendix to section 681 provides some guidelines for covered organizations in formulating their Red Flags Program.

The Red Flags Rule also requires that creditors establish a written policy that outlines how the organization will comply with the rules.  For health care providers looking for a sample compliance policy, the AMA has published one on its web site here.  The FTC has also published a document for creditors who are probably at low risk for identity theft here, a category that likely includes many solo and small law firms.

Once you have appropriately assessed your risks and written a plan, the plan must be approved by the ownership of your organization.  For solo and small firm attorneys who are already chief cook and bottle washer, that means you.  Larger corporations with a board of directors will need to take board action to approve the program and remain involved in the organization's compliance with it.

The guidelines emphasize that a creditor should exercise reasonable care to protect its covered consumer accounts from theft or unauthorized access.  Implicitly, this means that a covered organization should have appropriate data security systems in place that protect the organization's data from loss, unauthorized access, or theft.  Health care providers should already be compliant, as they have been required to comply with the HIPAA security regulations since 2003.  These regulations require regular technical risk assessments, mitigation plans, access control mechanisms, and data backup plans (among other requirements in the rules; see 45 C.F.R. § 164 et seq.).

Lawyers, however, may not have had the pleasure of complying with these rules (unless, of course, you are a business associate of a covered entity and are now required under the ARRA to comply fully, starting next year, with the HIPAA security regulations that already apply to covered entities).  For example, if an attorney accepts payments for services through a web site, the attorney should evaluate the risk of identity theft from the site and take appropriate steps to mitigate those risks, such as ensuring she is using a valid, current SSL certificate to encrypt communications with the client, not storing credit card numbers in a database that can be reached from the internet, and maintaining the server that houses the web site so that it is patched for known security risks and runs appropriate anti-virus software.
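
A tiny piece of that checklist can even be automated.  The sketch below uses Python's standard ssl and socket modules to complete a TLS handshake with a site and report when its certificate expires; the handshake itself fails if the certificate is expired or untrusted.  The hostname is a placeholder, not a real firm's payment page.

```python
import socket
import ssl
from datetime import datetime

def certificate_expiry(hostname: str, port: int = 443) -> datetime:
    """Connect over TLS and return the expiration date of the server certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # notAfter looks like "Jun  1 12:00:00 2025 GMT"
    return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

# Hypothetical domain standing in for a firm's payment page.
print("Certificate expires:", certificate_expiry("www.example.com"))
```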

From there, staff will need to be trained to recognize signs that a consumer's identity has been stolen and to take appropriate actions to protect the consumer from further loss.  The FTC form also indicates that outside agencies, such as a billing service, may need to be trained (or you need to verify that the organization has its own acceptable policy for complying with the rules).  After that, the program calls for an internal annual report on activities and for updates to address evolving threats to consumer identities.  Now that wasn't so bad, was it?

Facebook and Twitter: Implications for Your Business?

Technology presents us with new opportunities and challenges on a regular basis.  Social networks and other “web 2.0” applications are starting to make inroads into the mainstream of the internet (ask how many of your iPhone-using friends have apps for one or both of these to measure the reality of the hype).  As a result, staff at your business are bringing their internet usage habits into the workplace.  Prospective customers are looking for you through these tools.  And business owners may want to consider the implications for their organizations.

IT departments at most organizations have struggled to maintain an effective internet usage policy for staff with internet access.  The difficulty has been in balancing the security of the network against viruses and other threats with users' need to access internet resources for business purposes.  The rise of Google as a synonym for searching the web has increased the overall use of the internet as a business research tool.  Trying to keep inappropriate content from appearing in search results poses a real challenge for IT departments.

In addition, with the advent of more sophisticated attacks from web sites, IT departments have struggled to block phishing and other infectious sites and to patch their organization's computers to resist attacks from the internet.  Facebook and Twitter have both been used by malicious users to launch attacks on users of those sites (either by writing malicious applications and publishing them on Facebook, or by posting malicious links in tweets).  The unfortunate knee-jerk reaction of most IT departments is to simply block these sites at the corporate firewall, preventing staff from having any access to these internet resources.

The typical rationale has been that these are not work-related sites and staff are just wasting time using them on the clock, so shutting down access to them at work is perfectly reasonable.  But that rationale may no longer hold as the web 2.0 world takes shape.  For one thing, more businesses are establishing fan pages on Facebook in order to advertise their services and provide information to their customers.  Innovative businesses may also develop Facebook applications that are both popular and help to advertise the services the organization offers.  Businesses also use Twitter to keep customers in the loop on company activities and events, or monitor it to gauge how well their advertising campaigns are reaching certain demographics.

Web 2.0 technologies are becoming more pervasive on the internet, which raises the minimum skill set expected of staff at organizations that use web technologies to reach customers.  Blocking these technologies from the corporate network may result in a less-skilled workforce.  And, ultimately, according to Gartner, such efforts are futile and bound to fail because of the pervasive nature of these technologies.  (See CNET article)

It would seem that liberalization of internet use policies at companies, then, is an inevitable result.  And with that increased access come new responsibilities for staff and businesses.  A landlord sued a former tenant for defamation earlier this year over tweets by the tenant about mold in her apartment.  (See article here)  Twitter itself is a rather informal medium for posting information online, similar to having an instant message conversation in the chat rooms of yesteryear (which seem so quaint today).  And because it streams posts in real time, you may say something that you later regret.  Imagine, for example, that your business allows access to Twitter, and one of your employees angrily posts a series of defamatory tweets about a competitor or vendor.  Your organization may be slapped with a lawsuit if that competitor is monitoring Twitter for tweets mentioning it by name.

Facebook presents similar challenges for organizations, especially where employees blur the line between their social lives and work lives by forming, for example, Facebook groups with other employees.  Suppose a group of employees creates a group for only certain kinds of employees from your organization and intentionally excludes others (perhaps on the basis of gender or age).  Is your organization discriminating against the excluded group?  Does your organization have liability for the acts of your employees in forming the exclusive group?

The web can also present a trade secret leak risk for those of you with proprietary information or processes that your business uses to generate revenue.  Social media also present challenges for protecting intellectual property and avoiding infringement claims by others (tarnishment of famous marks on Twitter; I'm sure a case is brewing as I type this story).

These questions are unanswered.  And I don't offer these hypotheticals to scare your organization into shutting down the internet connection at the office.  My point is to encourage your organization to think about its policies on internet usage and what constitutes acceptable use of the internet during normal work hours.  Establishing an effective policy, and consistently enforcing it with your staff, goes a long way toward managing your exposure to a lawsuit.  Controlling the internet at the organization's firewall is unlikely to be a sufficient risk management tool.

There are a number of good starting points for an internet usage policy.  Here are some principles to consider when drafting yours:

  1. Empower staff to be responsible for their internet usage.
  2. Disrespectful communication is not acceptable, whatever the medium of communication.
  3. Do not download and install software from the internet that is not approved by your IT staff.
  4. Use the internet for professional reasons.
  5. Be mindful that staff representations online reflect on the reputation of their employer.
  6. There are real-world consequences for staff that abuse access to the internet.

If your organization uses Facebook or Twitter today to market itself, reinforce with your staff that organizational posts should be approved before they go on the web.  Staff should resist the immediacy of these services so that a consistent and accurate message is communicated to the outside world.

Lost Data in the Cloud: How Sad

The headlines are ablaze because somebody over at Danger upgraded a storage array without making a backup, and voila: bye-bye T-Mobile contact data.  (See the article on The Washington Post here)  Nik Cubrilovic's point in his article is that data has a natural lifecycle, and you should be able to survive without your contacts on your phone.  But he also makes the point that all sysadmins have memories of not being able to recover some data at some point, and sweating bullets as a result.  His takeaway: this stuff is hardly as reliable as we expect it to be.  "Cloud" computers are no different, except that they are generally managed by professionals who increase the odds of successful recovery compared to the basement enthusiasts.

Having a backup plan is important.  Testing your backups periodically is important.  But generally, the rule is that the most important data gets the most attention.  If you have to choose between backing up your T-Mobile contacts and your patients' health records, the latter will probably get more attention.  That's in part because there are laws that require more attention to the latter.  But it is also because you probably won't die if you can't call your aunt Susan without first emailing your mom for her number.  You can die if your doctor unknowingly prescribes you a medication that interacts with something missing from your chart because of data loss.
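
Testing backups doesn't have to be elaborate to be worthwhile.  The sketch below is a bare-minimum freshness check: does the backup file exist, is it non-empty, and is it recent?  The file names and the seven-day window are arbitrary placeholders; a fuller test would actually restore the backup and verify the data.

```python
import time
from pathlib import Path

def backup_is_fresh(backup_path: Path, max_age_days: int = 7) -> bool:
    """Bare-minimum backup test: the file exists, is non-empty, and is recent."""
    if not backup_path.exists() or backup_path.stat().st_size == 0:
        return False
    age_days = (time.time() - backup_path.stat().st_mtime) / 86400
    return age_days <= max_age_days

# Hypothetical backup files; the point is to check the most important data first.
for name in ["patient_records.bak", "phone_contacts.bak"]:
    status = "OK" if backup_is_fresh(Path(name)) else "MISSING OR STALE"
    print(f"{name}: {status}")
```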

The bottom line is this: data loss is inevitable.  There is a tremendous amount of data being stored today by individuals and businesses.  Even the very largest and most sophisticated technology businesses on Earth have had recent data losses that made the headlines.  But the odds of data loss if you do nothing about backups are still higher than if you at least use a cloud service.  Oh, and if you use an iPhone with MobileMe, it syncs your contacts between your iPhone, your computer, and Apple's http://www.me.com, so you actually have three copies of your contacts floating around, not just a copy in the "cloud."  Maybe you T-Mobile people aren't better off by "sticking together."

Big Dilemmas for Web Security

The federal government is getting into the fray over internet security in a national crisis.  (See Yahoo Article here)  A Senate committee considered and then promptly dropped language in a cybercrime bill that would have authorized the President to shut down internet traffic to compromised web sites.  This comes in the larger context of trying to set national policy on technical security, in light of our increasing dependency on the framework created by the internet.  Assuming that shutting down a web site were technically feasible, a wartime President would likely have the authority to do so, whether Congress passed a law about it or not.  See U.S. Const. art. II, § 2, cl. 1.  As a practical matter, if the President could allow for the rounding up of U.S. citizens during World War II solely because of their race, I think the President can safely assume that shutting down a web site would be constitutional.  See Korematsu v. United States, 323 U.S. 214 (1944).

The difficulty today, however, is that following 9/11, President Bush asserted that we are constantly at war with terrorists.  Unlike a more traditional notion of war, which has a relatively clear start and end, defining war in this manner means that the President is constantly acting within his war powers.  I don't think the founders of our nation intended for us to have a king, or contemplated that we would be in a constant state of war.  And the danger is that the President could exercise the power to shut down certain web sites deemed a security risk without much recourse for the web site owner.  Sites that might have an infection could be shut down, but so could sites that merely disagree with Presidential policies.

The risk to our internet infrastructure is real.  The authors of computer viruses today have come a long way from the kids of the 1990s who were just trying to annoy you.  Major web sites like yahoo.com, and ads served through Google's AdWords, have carried malware that attacks visitors to those sites, potentially infecting many millions of computers.  Our ability to respond effectively to such problems is directly related to how well we prepare for their realization.  Perhaps instead of delegating such broad authority to the President, we should work on delegating power to act under more specific circumstances, which would better balance the free speech rights of web site operators against the technical security needs of the nation.

Ant CyberSecurity

Ants in one's kitchen are a pest (and a difficult one to get rid of once the ants have found something good to eat), but ants may have a more constructive future annoying cyberthreats in digital form.  Haack, Fink, et al. have written a paper on using ant techniques for monitoring and responding to technical security threats on computer networks.  As they point out, computer networks continue to become more complex and challenging to secure.  System administrators flinch at the thought of adding a new printer, fax machine, or other device because of the increase in monitoring and administrative tasks.  This problem is only getting worse as more devices, like copiers and refrigerators, gain their own IP addresses on networks.

Securing these devices poses a monumental task for IT staff.  Haack, Fink et al. have proposed an alternate security method based on the behavior of ants in colonies.  In their paper, Mixed-Initiative Cyber Security: Putting humans in the right loop, the authors describe at a high level how semi-autonomous bits of code might work in concert to respond appropriately to threats, minimizing the amount of human intervention required to address an issue.  From a system administrator's perspective, there are a lot of balls to juggle to keep a network secure.  Most relatively complex IP devices generate logs that require regular review.  In addition, some devices and software send email or text alerts based on certain conditions.  Windows-based computers carry a suite of patching and anti-virus/anti-spyware software that requires monitoring and review.  Internal network devices of any real complexity log activity and output alerts.  Border control devices (firewalls) can be very noisy as they repel attacks and unwanted traffic from outside the secure network.  Printers create logs and alerts.  And the list goes on and on as you begin to examine particular software systems (such as database and mail servers).  A lot can go wrong.

Haack, Fink et al. propose a multi-tiered approach to tackling these security issues.  The lowest-level agents are "sensors," which correspond to ants in real life.  Sensors roam from device to device searching for problems and reporting them to other sensors and to sentinels.  For example, you could write a sensor whose only interest is finding network activity on a particular UDP or TCP port above a certain pre-defined threshold on a device.  The sensor would report back to its boss, a "sentinel," that computer A had an unusual amount of network activity on that port.
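
To make that concrete, here is a toy rendering of the sensor idea in Python.  It is not the authors' implementation; the port number, threshold, and host names are invented, and the connection counts are supplied by hand rather than measured from a live network.

```python
from dataclasses import dataclass

# A toy "sensor": it watches one thing -- activity on a single port -- and
# produces a report for its sentinel only when a pre-defined threshold is
# exceeded.
@dataclass
class PortSensor:
    port: int
    threshold: int  # connections per interval considered "unusual"

    def inspect(self, host: str, connection_count: int):
        """Return a report for the sentinel, or None if nothing is noteworthy."""
        if connection_count > self.threshold:
            return {"host": host, "port": self.port, "count": connection_count}
        return None

sensor = PortSensor(port=445, threshold=200)
print(sensor.inspect("workstation-a", 350))  # reported to the sentinel
print(sensor.inspect("workstation-b", 12))   # nothing to report
```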

Sentinels are in charge of security for a host or a group of similarly configured hosts (for example, all Windows file servers, or all Windows XP Professional workstations in a domain).  Sentinels interact with sensors and are also charged with implementing organizational security policy as defined by the humans at the top of the control hierarchy.  For example, a policy might require that all Windows XP workstations have a particular TCP port closed.  Sentinels would be taught how to configure their hosts to close that inbound TCP port (for example, by executing a script that enables TCP filtering on the workstations' network adapters, or by configuring a local software firewall).

Sentinels learn about problems from sensors that come to visit them.  Sentinels can also reward sensors that provide useful information, which in turn encourages more sensors to visit the sentinel (much as foraging ants lay down path information that can be read by other ants).  Sensors are designed to like being patted on the head by the sentinel, so enough pats lead more sensors to stop by for theirs.  Of course, if a sensor has nothing interesting to report, there is no pat on the head.  Sensors that rarely have useful information get taken out of service or self-terminate, while rewarded sensors are used by the sentinels to create more copies.  So, if computer problems are like sugar, the ants (sensors) that are best at finding the sugar are reproduced.
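
The reward loop can be sketched in a few lines.  Again, this is only an illustration of the dynamic described in the paper, not the authors' code: the "pats," survival odds, and cloning threshold below are arbitrary made-up values.

```python
import copy
import random

# Sensors that bring useful reports get "pats," sensors that never do are
# eventually retired, and well-rewarded sensors are copied, so the population
# drifts toward whatever problems currently exist on the network.
population = [{"looks_for": "open_port_445", "pats": 0},
              {"looks_for": "already_patched_bug", "pats": 0}]

def run_round(sensors, useful_findings):
    next_generation = []
    for s in sensors:
        if s["looks_for"] in useful_findings:
            s["pats"] += 1                      # rewarded by the sentinel
        if s["pats"] > 0 or random.random() > 0.3:
            next_generation.append(s)           # useful (or lucky) sensors survive
        if s["pats"] >= 2:
            clone = copy.deepcopy(s)            # successful sensors are reproduced
            clone["pats"] = 0
            next_generation.append(clone)
    return next_generation

for _ in range(3):
    population = run_round(population, useful_findings={"open_port_445"})
print(population)
```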

In a computer network, if a known hole is identified in the configuration of a Windows workstation, sensors that are designed to find that hole will be rewarded at the expense of those that are looking for an older problem that has already been patched.  The security response to new and evolving problems should therefore modify itself over time as new problems are identified and passed along down the hierarchy to the sensors.

Haack, Fink et al. also discuss the role of sergeant and supervisor (apparently appreciating the alliterative value of having all the roles in the paper start with the letter “S” – who says that computer scientists don’t have a sense of humor?).  The sergeant is the coordinator of the sentinels for an organization, and provides information graphically to the supervisor (the human beings that manage the security system).  The sergeant is the implementer of organizational policies set by the supervisors (all workstations will have a firewall enabled; the most recent anti-virus definitions will be applied within 1 day of being released by the vendor).

My reading of the paper is that the sentinels actually carry out changes to host devices when, based on information from sensors, they find a host that is not aligned with an organizational policy.  However, this is not discussed in detail.  The authors suggest it in section 3.3 with the reference to sentinels being responsible for gathering information from sensors and "devis[ing] potential solutions" to identified problems.  My guess is that tool kits for implementing certain solutions to identified problems would be written ahead of time by the system developer (for example, how to download and apply a recent anti-virus definition file for a particular desktop operating system).

The authors also envision that the sergeant might be granted authority to acquire external resources automatically without seeking prior approval from its human supervisors, at least for certain maximum expenditures.  For example, had the anti-virus subscription for definitions expired, a supervisor might grant the sergeant the authority to renew that subscription so long as it cost less than $x to do so.  A growing number of software makers have designed subscription services for software updates, many of which cost a set amount per month or year.  Most organizations that use these services would budget to pay for the service each year, so automatically authorizing such expenses might make sense.  This would also avoid lapses in security coverage that occur today in plenty of organizations that do not have adequate controls over renewal services.
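
The budget-cap idea reduces to a very small policy check.  The sketch below is one way that delegation might look; the dollar limit, vendor names, and prices are made up for illustration.

```python
# The "sergeant" may renew a lapsed subscription on its own only when the cost
# falls under a limit set by the human supervisors.
AUTO_APPROVAL_LIMIT = 500.00  # dollars per renewal, set by the supervisors

def handle_lapsed_subscription(vendor: str, annual_cost: float) -> str:
    if annual_cost <= AUTO_APPROVAL_LIMIT:
        return f"Auto-renewed {vendor} for ${annual_cost:.2f}."
    return f"Renewal of {vendor} (${annual_cost:.2f}) escalated to a human supervisor."

print(handle_lapsed_subscription("ExampleAV definitions", 299.00))
print(handle_lapsed_subscription("EnterpriseSIEM license", 12000.00))
```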

The authors discuss the issue of cross-organizational security in section 1 of their paper, indicating that “coordinated cyber defensive action that spans organizational boundaries is difficult” for both legal and practical considerations.  However, the proposed security response described by the authors could be improved if there was a way to securely share information with other organizations operating an “ant” cybersecurity system.  Sharing either active or popular threats, or tool kits for quickly responding to specific threats might help to improve an organization’s overall security response, and the larger value of the draft security system.

Information security continues to be a significant issue for most organizations that have increasingly complex information systems.  Establishing policies and implementing these policies represents a significant time investment, which many organizations cannot afford to make.  The more that security mechanisms can be automated to reduce risk, the greater the value to organizations especially where qualified information security experts are unavailable or too expensive for an organization’s security plan.  I’m interested to see where this concept goes in the future, and whether criminals will begin to design security threats to infect the security system itself (as they have in the past for Symantec’s anti-virus software).

iPhone Security and Corporate Networks

Some brouhaha has been brewing over how the iPhone handles encryption with Microsoft Exchange.  (See the Article Here on Infoworld).  According to InfoWorld, iPhones running versions of the OS prior to 3.1 did not accurately report whether they supported local encryption of data stored on the device.  For some corporate networks, encryption is mandated for organization-controlled devices that connect to their Exchange servers.  Apparently, prior to OS 3.1, the iPhone would report that it supported encryption whether it did or not, as a way to ensure that it could connect to the Exchange server.

This fact apparently has some IT and compliance staff in a tizzy, because they may have introduced a number of these devices in place of BlackBerrys on the basis that the iPhone would comply with a local encryption policy or organizational requirement.  For example, the Health Insurance Portability and Accountability Act (HIPAA) security regulations, in the technical standards, do address the need for encryption of protected health information (PHI) transmitted over networks.  For some organizations, the most sensible way to simplify regulatory compliance is a universal mandate that there be encryption between devices outside the corporate LAN and sensitive servers inside it.  Of course, if we are talking about email, mail that has been received and read is generally not encrypted to begin with, whether it is sensitive or not.  That's because most email users find the typical certificate-exchange approach to email encryption too complicated: digitally signing a message with a personal certificate and ensuring that the receiving party has a way to decrypt it.

Microsoft Exchange does allow for the transfer of other information (like calendars and tasks), but I seriously doubt that many health organizations use the Exchange calendar to manage patient appointments or put PHI into either of these data types.  Most of the PHI action in health care facilities is within their charting and practice management systems, and neither usually integrates with or is based on Microsoft Exchange.  So a blanket policy requiring that organization-controlled remote devices be encrypted before connecting to corporate resources can be a reasonable approach, but the reality is that HIPAA doesn't automatically mandate it for iPhones.

There should be a documented risk assessment for iPhones that connect to the corporate network, weighing the risk of PHI loss against the cost of mitigating it with encryption (and perhaps other mechanisms, like remote wipe).  Encryption should be used if there is a substantial risk of PHI being exposed when an iPhone is lost or stolen.  But to establish that, the risk analysis would need to evaluate how often these devices are lost per total phones per year, and how many of the lost phones actually had PHI on them.  My guess is that the likelihood would generally be small for most organizations.  The issue, then, is how to make your compliance plan flexible but also enforceable and effective at protecting your PHI.  And that, my friends, is the art of information security!
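
For what it's worth, that estimate is simple arithmetic once you have the inputs.  The sketch below shows the calculation; every number in it is a made-up placeholder to be replaced with your organization's own figures.

```python
# A back-of-the-envelope version of the risk estimate described above.
total_devices      = 50    # iPhones connected to the corporate network
loss_rate_per_year = 0.04  # estimated chance any one device is lost or stolen in a year
fraction_with_phi  = 0.10  # fraction of lost devices likely to hold PHI locally

expected_phi_exposures = total_devices * loss_rate_per_year * fraction_with_phi
print(f"Expected PHI exposure events per year: {expected_phi_exposures:.2f}")
# Even if this number is tiny, encryption plus remote wipe may still be cheap
# insurance, but the documented assessment itself is what HIPAA actually asks for.
```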