Introduction
The advent of "Trusted Computing" (TC) technology as specified by the Trusted Computing Group (cf. sources) has not met with much enthusiasm from the Free/Open Source Software (FOSS) and Linux communities so far. Despite this fact, FOSS-based systems have become the preferred vehicle for much of the academic and industrial research on Trusted Computing. In parallel, a lively public discussion between proponents and critics of TC has dealt with the question of whether the technology and concepts put forward by the TCG are compatible, complementary or potentially detrimental to the prospects of open software development models and products.

Common misconceptions about TC technology are that it implies or favours closed and proprietary systems, reduces users' options of running arbitrary software, or remotely controls users' computers. It has long been argued, though, that these and similar undesirable effects are by no means unavoidable, not least because the underlying technology is passive and neutral with regard to specific policies. The actual features displayed by TC-equipped platforms will almost exclusively be determined by the design of the operating systems and software running on top of them. With appropriate design, implementation and validation of trusted software components, and by using contractual models of negotiating policies, negative effects can be avoided while improving the system's trust and security properties. This is the intellectual starting point of the EU-supported, collaborative OpenTC research and development project (project no. 027635; cf. sources) that started in November 2006.

Combining FOSS and TC technology
OpenTC aims to demonstrate that a combination of TC technology and FOSS has several inherent advantages that are hard to match by any proprietary approach. Enhanced security at the technical level tends to come at the expense of constraining user options, and the discursive nature of FOSS development could help to find the right balance here. Since trusted software components have to be protected from analysis during runtime, it is highly desirable that their design is documented and that the source code is available to allow for inspection and validation. Finally, any attempt to introduce TC technology is likely to fail without the buy-in of its intended users, and openness could prove to be the most important factor for user acceptance.

OpenTC sets out to support cooperative security models that can be based on platform properties without having to assume the identifiability, personal accountability and reputation of platform owners or users. For reasons of privacy and efficiency, these models could be preferable to those assuming adversarial behaviour from the outset. A policy model based on platform properties, however, requires reliable audit facilities and trustworthy reporting of platform states to both local users and remote peers. The security architecture put forward by the TCG supplies these functions, including a stepwise verification of platform components with an integral, hardware-assisted auditing facility at its root. In OpenTC, this will be used as a basic building block.
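The stepwise verification with a hardware-assisted auditing facility at its root can be illustrated with a small sketch of the TPM's "extend" operation, which folds a measurement of each platform component into a Platform Configuration Register (PCR) as a hash chain. This is a simplified model in Python, not the actual TPM interface; the component names are invented for illustration.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = SHA-1(old PCR || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

# PCRs start out as all zeroes at platform reset.
pcr = b"\x00" * 20

# Each boot component is measured (hashed) before control is passed
# to it, and the measurement is folded into the PCR.
boot_chain = [b"bios", b"bootloader", b"hypervisor", b"guest-kernel"]
for component in boot_chain:
    pcr = extend(pcr, hashlib.sha1(component).digest())

print(pcr.hex())
```

Because each value depends on all previous ones, the final PCR value commits to the entire ordered chain of components: substituting, omitting or reordering any element of the Trusted Computing Base yields a different digest, which is what makes the audit trail non-forgeable.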

Trusted virtualization and protected execution environments
The goal of the OpenTC architecture is to provide execution environments for whole instances of guest operating systems that communicate with the outside world through reference monitors guarding their information flow properties. The monitors kick into action as soon as an OS instance is started. Typically, the policy enforced by a monitor should be immutable during the lifetime of the instance: it can neither be relaxed through actions initiated by the hosted OS nor overridden by system management facilities. In the simplest case, this architecture will allow two independent OS instances with different grades of security lock-down to run on an end user system. Such a model, with an unconstrained "green" environment for web browsing, software download and installation, and a tightly guarded "red" side for tax records, banking communication etc., has recently been discussed by Carl Landwehr (2005). More complex configurations are possible and frequently needed in server scenarios.

OpenTC borrows from research on trusted operating systems that goes back as far as 30 years. The underlying principles – isolation and information flow control – have been implemented by several security-hardened versions of Linux, and it has been demonstrated that such systems can be integrated with Trusted Computing technology (see e.g. Maruyama et al. 2003). However, the size and complexity of these implementations is a serious challenge for any attempt to seriously evaluate their actual security properties. The limited size of developer communities and the difficulty of understanding and managing configurations and policies continue to be roadblocks for deployment of trusted platforms and systems on a wider scale.

Compared to full-blown operating systems, the tasks of virtualization layers tend to be simpler. This should allow OpenTC to reduce the size of the Trusted Computing Base. The architecture separates management and driver environments from the core system and from hosted OS instances; these environments can either be hosted under stripped-down Linux instances or run as generic tasks of the virtualization engine. The policy enforced by the monitors is separated from the decision and enforcement mechanisms. It is human-readable and can therefore be subjected to prior negotiation and explicit agreement.
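The idea of a human-readable flow policy that is kept apart from the enforcement mechanism can be sketched as follows. The compartment names and the policy format are hypothetical and serve only to illustrate the principle of default-deny information flow control between a "green" and a "red" environment.

```python
# A hypothetical, human-readable flow policy stating which compartment
# may send data to which destination. Entries not listed are denied.
POLICY = {
    ("green", "internet"): True,       # unconstrained browsing side
    ("red", "bank.example"): True,     # locked-down side: bank traffic only
    ("red", "internet"): False,
    ("green", "red"): False,           # no flows between compartments
}

def monitor_allows(src: str, dst: str) -> bool:
    """Reference monitor decision: default-deny anything not listed."""
    return POLICY.get((src, dst), False)

print(monitor_allows("red", "bank.example"))   # permitted by policy
print(monitor_allows("green", "red"))          # denied: cross-compartment flow
```

Keeping the policy in a declarative, inspectable form like this is what allows it to be negotiated and explicitly agreed upon before an OS instance is started, while the monitor that enforces it remains a separate, immutable mechanism.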

OpenTC chose (para-)virtualization as the basis for its trusted system architecture, which makes it possible to run standard OS distributions and applications side by side with others that are locked down for specific purposes. This preempts a major concern raised with regard to Trusted Computing, namely that TC excludes components not vetted by third parties. The OpenTC architecture makes it possible to limit constraints to components marked as security-critical, while unconstrained components can run in parallel.

OpenTC builds on two virtualization engines: Xen and L4. Both are available under FOSS licenses and backed by active developer and user communities. Currently, it is necessary to compile special versions of Linux that cooperate with the underlying virtualization layer. However, the development teams will improve their architectures to support unmodified, out-of-the-box distributions as well. This will be simplified by hardware support for virtualization as offered by AMD's and Intel's new CPU generations. Prototype results have shown that this hardware support could also make it possible to host unmodified operating systems other than Linux (see e.g. Shankland 2005).

From trusted to trustworthy computing
TCG hardware provides basic mechanisms to record and report the startup and runtime state of a platform in an extremely compressed, non-forgeable manner. It makes it possible to create a digitally signed list of values that correspond to elements of the platform's Trusted Computing Base. In theory, end users could personally validate each of these components, but this is not a practical option. End users may have to rely on other parties to evaluate and attest that a particular set of values corresponds to a system configuration with a desired behaviour. In this case, their reason to trust will ultimately stem from the social trust they put in statements from specific brands, certified public bodies, or peer groups.
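The reporting step can be sketched as follows: the platform signs its list of recorded values together with a verifier-chosen nonce, and a remote peer compares the reported values against a known-good reference configuration. This is only a conceptual model; a real TPM signs such "quotes" with an RSA attestation identity key whose private part never leaves the chip, for which the HMAC below is merely an illustrative stand-in.

```python
import hashlib
import hmac

# Stand-in for the TPM's attestation identity key (illustrative only;
# in real TC hardware the private key is not accessible to software).
AIK_SECRET = b"illustrative-only-secret"

def quote(pcr_values, nonce):
    """Sign the list of recorded PCR values together with a fresh nonce."""
    blob = nonce + b"".join(pcr_values)
    return hmac.new(AIK_SECRET, blob, hashlib.sha256).digest()

def verify(pcr_values, nonce, signature, reference_values):
    """Remote peer: check the signature, then compare the reported
    values against a known-good reference configuration."""
    expected = hmac.new(AIK_SECRET, nonce + b"".join(pcr_values),
                        hashlib.sha256).digest()
    return (hmac.compare_digest(expected, signature)
            and pcr_values == reference_values)
```

The nonce prevents replay of an old report, while the comparison against reference values is exactly the step that, as the text notes, end users will usually have to delegate to parties they trust socially.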

A much-discussed dilemma arises if trusted components become mandatory prerequisites for consuming certain services. Even if such components appear suspicious to the end user, they might still be required by a provider. This problem is particularly pronounced if the components in question come as binaries only and do not allow for analysis. The recent history of DRM technology has shown that Trojans can easily be inserted under the guise of legitimate policy enforcement modules. Clearly, a mechanism that enforces DRM on a specific piece of content acquired by a customer must not assume an implicit permission to sift through the customer's hard disk and report back on other content.

This highlights an important requirement for components that deserve the label "trusted": at least in principle, it should be possible to investigate their actual trustworthiness. A clearly stated description of function and expected behaviour should be an integral part of their distribution, and it should be possible to establish that they do not display behaviour other than that stated in their description – at compile time, runtime, or both. A socially acceptable approach to Trusted Computing will require transparency and open processes. In this respect, a FOSS based approach looks promising, as it might turn openness into a crucial competitive advantage.

The TCG specification is silent on the procedures or credentials required before a software component can be called "trusted". OpenTC works on the assumption that defined methodologies, tools, and processes to describe the goals and expected behaviour of software components are needed. This way, it will become possible to check whether their implementation reflects (and is constrained to) their description. Independent replication of tests may be required to arrive at a commonly accepted view of a component's trustworthiness, which in turn requires accessibility of code, design, test plans and environments for the components under scrutiny.

Trust, risk, and freedom
Most of us have little choice but to trust IT systems in which more and more things can go wrong, while our insight into what is actually happening on our machines gets smaller by the day. Users face a situation of bearing full legal responsibility for actions initiated on or by their machines while lacking the knowledge, tools and support to keep these systems in a state fit for purpose. Due to the growing complexity of our technology, we will increasingly have to rely on technical mechanisms that help us estimate the risk prior to entering IT-based transactions. Enhanced protection, security and isolation features based on TCG technology will become standard elements of proprietary operating systems and software in due time.

This evolution is largely independent of whether FOSS communities endorse or reject this technology. OpenTC assumes that mutual attestation of the platforms' "fitness for purpose" will become necessary for proprietary systems as well as FOSS-based ones. The absence of comparable protection mechanisms for non-proprietary operating or software systems will immediately create problems for important segments of professional Linux users. In fact, many commercial, public or governmental entities have chosen non-proprietary software for reasons of transparency and security. These organizations tend to be subject to stringent compliance regulations requiring state-of-the-art protection mechanisms. If FOSS-based solutions do not support these mechanisms, the organizations could eventually be forced to replace their non-proprietary components with proprietary ones: a highly undesirable state of affairs that OpenTC might help to avoid.

From this perspective, the current discussion about the next version of the GNU General Public License raises serious concerns. Some of the suggested changes could affect the possibility of combining Trusted Computing technology and Free Software licensed under GPLv3 (this refers to the GPLv3 draft, status 2006-02-07 16:50; cf. sources). Section 3 of this draft concerns Digital Restrictions Management, a term that has been used by Richard Stallman in discussions about Trusted Computing. For example, the current draft excludes "modes of distribution that deny users that run covered works the full exercise of the legal rights granted by this License". It is an open question whether this might apply to elements of a security architecture such as OpenTC. A Trusted Computing architecture does not constrain the freedom of copying, modifying and sharing works distributed under the GPL. However, it can constrain the option of running modified code as a trusted component, since previously evaluated security properties might have been affected by the modifications. Unless a re-evaluation is performed, the properties of modified versions cannot be derived from the attestation of the original code; the security assurances given for the original code no longer apply.

This is by no means specific to the Trusted Computing approach; it also applies to commercial Linux server distributions with protection profiles evaluated according to the Common Criteria. The source code for the distribution is available, but changing any of the evaluated components results in losing the certificate. Whether or not software is safe, secure, or trustworthy is independent of the question of how it is licensed and distributed. The option to choose between proprietary and FOSS solutions is an important one and should be kept open. This is one of the reasons why several important industrial FOSS providers and contributors participate in OpenTC. The project aims at a practical demonstration that Trusted Computing technology and FOSS can complement each other. This is possible in the context of the current GPLv2. Whether it will be so under a new GPLv3 remains to be seen.

Sources

Disclaimer
The content of this paper is published under the sole responsibility of the author. It does not necessarily reflect the position of HP Laboratories or other OpenTC members.

About the author: Dirk Kuhlmann is a senior research engineer for Hewlett Packard Laboratories in Bristol, UK, where he works as a member of the Trusted Systems Laboratory. He acts as the overall technical lead for the OpenTC project. Contact: dirk.kuhlmann@hp.com

Status: first posted 01/03/06; licensed under Creative Commons; included in the INDICARE Monitor of February 2006
URL: http://www.indicare.org/tiki-read_article.php?articleId=183