Ants in one’s kitchen are a pest (and a difficult one to eliminate once they have found something good to eat), but ants may have a more constructive future harassing cyberthreats in digital form. Haack, Fink, et al. have written a paper on using ant-inspired techniques for monitoring and responding to security threats on computer networks. As they point out, computer networks continue to become more complex and challenging to secure. System administrators flinch at the thought of adding a new printer, fax machine, or other device because of the added monitoring and administrative burden. The problem is getting worse as more devices, from copiers to refrigerators, gain their own IP addresses on networks.
Securing these devices poses a monumental task for IT staff. Haack, Fink, et al. have proposed an alternative security method based on the behavior of ants in colonies. In their paper, “Mixed-Initiative Cyber Security: Putting humans in the right loop,” the authors describe at a high level how semi-autonomous bits of code might work in concert to respond to threats while minimizing the amount of human intervention required to address an issue. From a system administrator’s perspective, a lot of balls must be juggled to keep a network secure. Most reasonably complex IP devices generate logs that require regular review. Some devices and software also send email or text alerts under certain conditions. Windows-based computers run a suite of patching and anti-virus/anti-spyware software that requires monitoring and periodic review. Even moderately complex internal network devices log activity and emit alerts. Border control devices (firewalls) can be very noisy as they repel attacks and unwanted traffic from outside the secure network. Printers create logs and alerts. And the list goes on as you begin to examine particular software systems, such as database and mail servers. A lot can go wrong.
Haack, Fink, et al. propose a multi-tiered approach to tackling these security issues. The lowest-level agents are “sensors,” which correspond to the ants of the analogy. Sensors roam from device to device searching for problems and reporting them to other sensors and to sentinels. For example, you could write a sensor whose only interest is finding network activity on a particular UDP or TCP port above a pre-defined threshold on a device. The sensor would report back to its boss, a “sentinel,” that a given computer had an unusual amount of network activity on that port.
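As a concrete illustration, such a port-activity sensor might look something like the following Python sketch. The class names, the `traffic_bytes` host API, and the shape of the “finding” are my own inventions; the paper does not specify an implementation.

```python
class Host:
    """Minimal stand-in for a monitored device (illustration only)."""
    def __init__(self, name, traffic_by_port):
        self.name = name
        self._traffic = traffic_by_port  # port -> observed bytes

    def traffic_bytes(self, port):
        return self._traffic.get(port, 0)


class PortActivitySensor:
    """Roams hosts looking for traffic on one port above a threshold."""
    def __init__(self, port, threshold_bytes):
        self.port = port
        self.threshold = threshold_bytes

    def inspect(self, host):
        """Return a finding for the sentinel, or None if nothing unusual."""
        observed = host.traffic_bytes(self.port)
        if observed > self.threshold:
            return {"host": host.name, "port": self.port, "bytes": observed}
        return None  # nothing to report; no reward for this visit
```

A sensor like this would carry its finding to a sentinel rather than act on it itself; acting is the sentinel’s job, as described below.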
Sentinels are in charge of security for a host or a group of similarly configured hosts (for example, all Windows file servers, or all Windows XP Professional workstations in a domain). Sentinels interact with sensors and are also charged with implementing organizational security policy as defined by the humans at the top of the control hierarchy. For example, a policy might require that all Windows XP workstations have a particular TCP port closed. Sentinels would be taught how to configure their hosts to close that inbound TCP port (for example, by executing a script that enables TCP filtering on each workstation’s network adapters, or by configuring a local software firewall).
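One way such a policy check might be expressed is as a simple set comparison; the port numbers and names below are illustrative placeholders, not anything from the paper:

```python
# Hypothetical sentinel policy: these inbound TCP ports must be closed.
REQUIRED_CLOSED_TCP_PORTS = {139, 445}

def policy_violations(open_ports, must_be_closed=REQUIRED_CLOSED_TCP_PORTS):
    """Return the ports a host has open that policy says must be closed."""
    return sorted(must_be_closed & set(open_ports))
```

A sentinel could run a check like this against each host it oversees and invoke a pre-written remediation script for anything the check returns.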
Sentinels learn about problems from sensors that come to visit them. Sentinels can also reward sensors that provide useful information, which in turn encourages more sensors to visit that sentinel (much as foraging ants lay down trail pheromones that can be read by other ants). Sensors are designed to seek these rewards, so a sentinel that hands them out regularly draws more sensor visits. Of course, a sensor with nothing interesting to report gets no reward. Sensors that rarely have useful information are taken out of service or self-terminate, while rewarded sensors are used by the sentinels to create more copies. So, if computer problems are like sugar, the ants (sensors) that are best at finding the sugar are the ones that reproduce.
In a computer network, if a known hole is identified in the configuration of a Windows workstation, sensors designed to find that hole will be rewarded at the expense of those looking for an older problem that has already been patched. The security response should therefore adapt over time as new problems are identified and passed down the hierarchy to the sensors.
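A toy version of this reward-driven selection, with names and numbers I have made up for illustration, might look like:

```python
def next_generation(sensors, rewards, min_reward=1):
    """One cycle of ant-style selection: rewarded sensors survive and
    replicate in proportion to their reward; the rest self-terminate.

    sensors -- list of sensor identifiers
    rewards -- dict mapping sensor -> rewards earned this cycle
    """
    population = []
    for sensor in sensors:
        earned = rewards.get(sensor, 0)
        if earned >= min_reward:
            population.append(sensor)             # the original survives
            population.extend([sensor] * earned)  # plus one copy per reward
    return population
```

Here a sensor hunting a freshly disclosed hole crowds out one hunting an already-patched problem: `next_generation(["patched_hole", "new_hole"], {"new_hole": 2})` yields a population of three `"new_hole"` sensors and no `"patched_hole"` sensors.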
Haack, Fink, et al. also discuss the roles of sergeant and supervisor (apparently appreciating the alliterative value of having all the roles in the paper start with the letter “S” – who says that computer scientists don’t have a sense of humor?). The sergeant coordinates the sentinels for an organization and presents information graphically to the supervisor (the human beings who manage the security system). The sergeant implements organizational policies set by the supervisors (all workstations will have a firewall enabled; the most recent anti-virus definitions will be applied within one day of release by the vendor).
From the paper, I presume that the sentinels actually carry out changes to host devices when, based on information from sensors, they find a host that is not aligned with an organizational policy. However, this is not discussed in detail. The authors suggest as much in section 3.3 with the reference to sentinels being responsible for gathering information from sensors and “devis[ing] potential solutions” to identified problems. My guess is that toolkits for implementing certain solutions would be written ahead of time by the system developer (for example, how to download and apply a recent anti-virus definition file for a particular desktop operating system).
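If that guess is right, the toolkit could be as simple as a table mapping a problem label to a canned remediation. Everything below (the labels, function names, and return strings) is hypothetical; real remediations would run scripts against the host:

```python
# Hypothetical pre-written remediations a sentinel might carry.
def close_tcp_port(host, detail):
    return f"enabled TCP filtering for port {detail} on {host}"

def update_av_definitions(host, detail):
    return f"applied anti-virus definition file {detail} on {host}"

TOOLKIT = {
    "open_forbidden_port": close_tcp_port,
    "stale_av_definitions": update_av_definitions,
}

def remediate(finding):
    """Dispatch a sensor finding to its canned fix, or None to escalate."""
    fix = TOOLKIT.get(finding["problem"])
    if fix is None:
        return None  # no toolkit entry: escalate to the sergeant
    return fix(finding["host"], finding["detail"])
```

The design choice here is that anything without a toolkit entry escalates up the hierarchy rather than being improvised by the sentinel, which matches the paper’s emphasis on keeping humans in the loop for novel problems.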
The authors also envision that the sergeant might be granted authority to acquire external resources automatically, without seeking prior approval from its human supervisors, at least up to a maximum expenditure. For example, if the anti-virus definition subscription had expired, a supervisor might grant the sergeant the authority to renew it so long as doing so cost less than $x. A growing number of software makers offer subscription services for software updates, many of which cost a set amount per month or year. Most organizations that use these services budget to pay for them each year, so automatically authorizing such expenses might make sense. It would also avoid the lapses in security coverage that occur today in plenty of organizations without adequate controls over renewals.
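The spending rule itself amounts to a one-line check; the cap value and names below are of course placeholders for whatever a supervisor would actually configure:

```python
SUBSCRIPTION_CAP = 500.00  # hypothetical supervisor-set limit ($x)

def may_auto_renew(cost, cap=SUBSCRIPTION_CAP):
    """True if the sergeant may renew without human approval."""
    return cost <= cap
```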
The authors discuss cross-organizational security in section 1 of their paper, noting that “coordinated cyber defensive action that spans organizational boundaries is difficult” for both legal and practical reasons. However, the proposed security response could be improved if there were a way to securely share information with other organizations operating an “ant” cybersecurity system. Sharing information about active or prevalent threats, or toolkits for quickly responding to specific threats, might improve an organization’s overall security response and increase the larger value of the proposed security system.
Information security continues to be a significant issue for most organizations, whose information systems grow ever more complex. Establishing and implementing security policies represents a significant time investment that many organizations cannot afford to make. The more that security mechanisms can be automated to reduce risk, the greater their value, especially where qualified information security experts are unavailable or too expensive for an organization’s security plan. I’m interested to see where this concept goes in the future, and whether criminals will begin to design threats that infect the security system itself (as they have in the past with Symantec’s anti-virus software).